Oct. 3, 2023

#229 AI and the Future of Data Security: Insights from Rob Juncker, CTO at Code42


Prepare to be enlightened as we navigate through the intricacies of data security in the digital age with our distinguished guest, Rob Juncker, the CTO of Code42. This conversation promises to reveal the hidden dangers of our highly collaborative culture, particularly the potential risks posed by departing employees who may carry essential data away with them. Don't miss Rob's insights on generative AI, including technologies like GPT, and their dual role as potential hazards and opportunities in data protection.


Our discussion progresses into the realm of AI, its role in contemporary organizations, and the critical need for education around this potent technology. Gain crucial understanding as Rob elucidates the concept of teaching AI models through rewards and punishments, while ensuring the confidentiality of critical data. Discover the importance of managing and effectively adopting AI within organizations to bolster data security.


Finally, we shift focus towards risk detection and the innovative ways AI can be employed to create personalized content for users and expedite data analysis. Understand how AI can generate videos from text scripts, delivering key points quickly and efficiently. Rob also sheds light on the issue of data exfiltration by malicious entities, the role of a CTO in cybersecurity, the significance of a security mindset in coding, and methods to enable security teams to perform investigations with speed and accuracy. This conversation with Rob Juncker is not just an exchange of ideas; it's an opportunity to deepen your understanding of managing data security in the AI era.


More about Rob:


https://www.linkedin.com/in/robjuncker

https://www.code42.com

Transcript


0:00:01 - Mehmet
Hello and welcome back to a new episode of The CTO Show with Mehmet. Today I'm very pleased to have with me, joining live from the US, the CTO of Code42, Rob Juncker. Rob, thank you very much for being on the show. I'm really honored and humbled to have you with me today. First of all, like I said, traditional question: can you just introduce yourself for the folks who don't know you? And I doubt that people would not know you, but nevertheless.

0:00:23 - Rob
Well, first of all, Mehmet, thanks for having me on the show today. As a matter of fact, I saw the topic that you were raising today, and it's so relevant for all of us. I was quite excited to be able to join you, so the honor today is all mine, first and foremost. For everyone that doesn't know me, I'm Rob Juncker. I'm Chief Technology Officer at Code42. We specialize in cybersecurity, so I get to look at all of the risks that are occurring out there with all of the collaboration that we do. In the particular instance of our product today, we make sure that IT and security teams aren't surprised by data leak, loss or theft that's originating from insiders, and what that really means is that we're securing the collaboration culture. And to your topic today: there isn't a single conversation happening right now that doesn't involve the collaboration culture and the rapid expansion that it's seeing with artificial intelligence, especially some of the new generative technologies as well.

0:01:21 - Mehmet
Yeah, that's absolutely, you know, top of mind, as we say. Now, before we go and deep dive into the AI part, you mentioned something which I try all the time to repeat, and sometimes I feel people might get, you know, bored because I'm repeating this, but it's something true. And from your perspective, Rob, because you are, I would say, working on technology that solves, you know, this issue, data leak, right, and breaches: why is it important that we stop this? And it doesn't matter the size of the company, I want you to highlight that. First let's talk about the problem, and then deep dive into the generative AI that is also bringing another, I would say, dimension to that.

0:02:16 - Rob
Yeah, you know, right now, what I'm seeing is this incredible groundswell and shift in the way in which organizations are managing their data. And it started with the pandemic, where everybody got sent home, and in that moment, we took all the data that was traditionally centralized in our office and in the four walls of, you know, our business, and we ended up having to stick it down on everybody's endpoints or spread it out in clouds where people could collaborate easier, and it really changed the face of collaboration. It meant that data could leak a lot easier, because if I just even sent a file to you via email, those collaboration tools were so helpful. They tell me, you know, this file is over 25 megs, do you want to send them a public link instead? And then they can just click on the link and download the file themselves.

Well, there's so much happening in the ways that technology has changed that data can leak right now easier than ever before, and most of the time, the users out there who are leaking data aren't doing it maliciously or intentionally. It's an accidental leak of information, where they overexpose or over-permission a file so that more people can gain access to it than should. Now, as you start going down this path, the reason why it's so important to actually fix these problems, and get a grasp of where your intellectual property, as well as your customer data, is, is because the cost of a breach couldn't be higher right now. As you begin taking a look at it, you know, a lot of this intellectual property is portable from one organization to the next, or it could be included in someone else's product if they know how it works in your organization. And that's coupled with some of the other things we're seeing in the economy right now, like the revolving door of employees, where employees nowadays are moving between jobs a lot faster, and employees also are feeling that they're entitled to take their information with them.

So we're seeing more and more departing employees in an organization who will take their customer data, they'll take their customer contact list, they'll take, you know, source code with them from one job.

In some cases they find themselves at a competitor who actually is leveraging the information that the previous organization had for, you know, malicious, if not damaging, purposes to the previous organization. And, you know, we sit here at Code42 today and we have the luxury of looking at 180 billion data points over a 90-day span of how data is moving around, and I will tell you, now more than ever, what we're seeing is that departing employees feel that that is their right. In fact, we're seeing over 62% of departing employees taking data from their current organization to their next organization. So, you know, as we all talk about data leakage, people are always concerned about things like, you know, customer lists or, you know, some light intellectual property. But realistically, it's amazing, with the data that we see, how much data walks out that door constantly, and if organizations aren't managing it, you could be putting your business at serious risk of loss of revenue or legal implications by not taking active steps to manage it.

0:05:21 - Mehmet
Yeah, 100%. It's something also we're noticing at a very fast pace, I would say. Now to your point, Rob: when you just mentioned generative AI, and ChatGPT of course is the thing that comes to mind, how can ChatGPT be a risk, and how can it be, at the same time, something that, you know, an organization can leverage to avoid, you know, data breaches?

0:05:51 - Rob
Yeah, well, let's start off with celebrating ChatGPT and what it's actually done. First, from a technology perspective, if you're not a CTO and you're not loving what has happened right now around technology, I'm sorry, you're missing the beauty of what's just occurred, right? And ChatGPT is such a fascinating tool because it actually brought to us, as consumers, the realization of how powerful those technologies could be. And as soon as the consumers got a hold of it, pretty soon we all realized that this was impacting us so quickly, right, that it meant it had to expand and get back into business, and how do people leverage it from that perspective, right? So, first and foremost, I've got to tell you, I love what it's done for us. I love the fact that for CTOs right now it's getting beyond ChatGPT, and suddenly we're able to access these algorithms through cloud providers, you know, AWS and Azure and others, right, where we can bring that same powerful technology into our businesses. Now, when you start talking about how it's changing the landscape, I mean, you've got to admit, I'm not sure how it's not changing the landscape of the way in which we're working right now. Um, and I'll tell you here, even at Code42.

We're using AI in a number of different ways. The first thing that we actually did was we launched it internally to Code42. So we created a private, ChatGPT-like service, if you will, where all of us employees could ask it questions. Now, we did that for two reasons. One is we wanted to see the questions that people were asking it, right? So, like, by us seeing the prompts and what people were interested in, it would tell us where they think AI could be practical in their day-to-day productivity. But the second thing we did was we kind of containerized it so that we weren't training a public model. What I didn't want was my users here running off and using those open models where they were putting maybe some intellectual property in there that the model could actually learn from, and, as a result, we could be leaking data out to those generative AI technologies out there.

Right, and let me just tell you right now, everybody here at Code42, I don't care what position you're in, you are using some aspect of generative AI, whether you know it or not: from our coders here on my team that are out here using the code suggestion tools like a Copilot or a CodeWhisperer today, to our entire employee base, where, you know, we're asking it questions. Now, that internal ChatGPT model that I mentioned, we're asking it questions like, you know, what are our holidays for 2024? And it's coming back and telling us what days our holidays are, and it's saving us in productivity down that path too. So use of it is widespread. Now the real question is: what roadblocks do we run into as we adopt this, and then how do we use this in a responsible way as well? Right, and that's what we see a lot of now, and that's what we keep finding ourselves asking those questions about.

0:08:42 - Mehmet
Yeah, but what are the risks, Rob? Like you mentioned, you are using your own, um, I mean, it's a closed model, it's not the public one. Now, if I am today someone who's listening or watching this episode, you know, he or she might say, hey, hold on one second, so that's not me. Does that mean there is a risk if my employees are using ChatGPT, because maybe they will leak some, you know, competitive information? Maybe they are sharing something that should not be outside the organization? Can you shed some light on this part?

0:09:20 - Rob
Yeah, well, and I will tell you, let's break the risks down here into a couple categories, right? And the first one I would actually say is the risk of fear that, you know, every organization is fighting with, right? And I'll tell you, the thing that I ran into right away as we started talking about these technologies was, I had a number of people who felt like their job was going to be replaced by these technologies, and, you know, a lot of people immediately, you know, put the brakes on. They're like, no, no, no, no, no, don't bring this in, I don't want to use this, right? Um, but one of the things that we were able to do on that risk was we were able to educate them through it, making them realize that in some cases, the real risk to their job is if they didn't understand how to use this technology or they didn't adopt that technology, and that was a big fear that we ran into right away. Now, that was the first risk we had to clear, and I would tell every CTO or anyone else who's bringing these kinds of technologies into your organization: talk to your organization openly about how these technologies work. Make sure people understand it. That'd be the best pro tip that I've heard, um, in a long time about how to begin the implementation. Now, the second risk, like you said, is what data is going in there and how is that data ultimately being used. As an example, these models can learn very fast based upon the way in which they're prompted or the way in which they respond, and you either reward it and give it a treat, if you will, saying, good job, that was a great response, or, alternatively, you say, that was a terrible response, you just had one of those AI hallucinations, right? And it learns from that as well, right? And you are right: these models, because they're constantly learning, like a human brain, when they're asked a question, they don't discern whether the answer should or should not include certain aspects of data in it. So if, all of a sudden, I had a product roadmap where I was feeding it into a public model, saying, hey, here's my roadmap for what I'm working on right now on our product, what should I work on next? And ask it a question like that, in some cases it'll be like, wow, that was a great idea. But at the same time, it's learning what my roadmap is. So someone else could come along and ask it a question like, hey, what does Code42's roadmap look like? And it would say, oh, you know what, I know the answer to this, it looks kind of like this, and it just spits it out, right.
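The guardrail Rob is describing, keeping confidential material like a roadmap out of prompts bound for public models, can be pictured as a screening step that sits in front of the model. The sketch below is purely illustrative, not Code42's tooling: the pattern list is invented, and a real data protection program would rely on classifiers and document labels rather than a handful of keywords.

```python
import re

# Hypothetical markers for confidential material; a real DLP policy
# would use trained classifiers and data labels, not a keyword list.
SENSITIVE_PATTERNS = [
    r"\broadmap\b",
    r"\bconfidential\b",
    r"\bcustomer list\b",
    r"\b\d{3}-\d{2}-\d{4}\b",  # SSN-shaped numbers, as one concrete example
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for a prompt bound for a public model."""
    hits = [p for p in SENSITIVE_PATTERNS if re.search(p, prompt, re.IGNORECASE)]
    return (len(hits) == 0, hits)

allowed, hits = screen_prompt("Here's my product roadmap, what should I build next?")
print(allowed)  # False: block or redact before the public model can learn from it
```

A gateway like Code42's internal ChatGPT-like service could run a check of this shape on every prompt, letting benign questions ("what are our holidays for 2024?") through while flagging ones that would train a public model on intellectual property.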

So for us, when you talk about kind of how you educate and how you interact with models, right, um, there's a very important reality: we have to educate our users about what data is going in there. As an example, you know, we have thousands of customers here at Code42 and we do security work for all of those customers. We obviously can't feed that security data into an open model, because in some cases one customer's data would be sitting alongside some other customer's data, and we've got to be thinking about how we actually maintain that private notion of data in models like this that can learn and can adapt and grow, um. So those are some considerations now.

The last thing just to bring up, as we talk about the risk of feeding things in, is, you know, there is the desire to educate and the desire to get good answers out of these models, but there's also a legal side of everyone's business today, and I think all of our lawyers are sitting out there right now and going, how do I ring-fence this? How do I put something around this that makes me feel like my organizational data isn't being shared externally, so that I'm not losing copyright protection, not losing trade secrets, things along those lines? And this is where I think lawyers, CTOs and CISOs all need to get together in a room. And that's exactly what we did here at Code42, as a best practice. We all talked about our fears, our concerns, um, and we sat down literally in a, you know, in a conference room with a whiteboard, checking off every single one of those concerns, making sure that we've got, you know, some level of remediation around each one of those things.

And for me, as the CTO, you know, a lot of these code augmentation tools bring in publicly available source code, and making sure that I don't all of a sudden put something that is open source in my source code, which could cause me to have to, you know, disclose some of my source code openly, is a real concern. So we sat down, we talked about all of those, and we went through and checked off that list. Now, the thing that I've got to tell you, Mehmet, as you were talking about this, is, you and I have been joking a little bit about, you know, the risk of users putting confidential data in these models, right? Um, and the risk is so real, um, and I think that that's been the other side of this too.

It's that, um, you know, as we talk about the technology, we always talk about three things here at Code42. It's kind of our three T's: technology, training and transparency, right? And on the technology out there, ChatGPT is a great tool. The training was the other thing that we had to bring in here at Code42, and we're encouraging every one of our customers to do the same, and in fact today we've got a product here at Code42 that does training around AI, right, and what to put into those engines and what not to put in those engines.

And the thing that we realized here quickly is the power of these tools was such that we had to spend a decent amount of time telling our users, here's what you can put in here and here's what you can't put in there. And suddenly that data protection policy that we all have, where we talk about the difference between internal, restricted and confidential data, we need to make sure our users understood that, so that they weren't putting data that might be customer data, customer secrets and things like that into those models. And that's why we ring-fenced a lot of those models that we have today, and really took the time to educate our users and pay attention to how our users are using them.

0:14:54 - Mehmet
So yeah, this is really a real challenge, Rob, and I think, yeah, education is key here, because, to your point also about, you know, how employees are shifting between companies so fast, and because, you know, COVID brought, you know, the culture of remote work as well, you can't really control it; maybe they are using their own devices to use, you know, ChatGPT, so it's a huge challenge. But on the other side, you know, I've spoken to a lot of CTOs like yourself and a lot of people who were looking at this, but also positive about the use of the same technology, to your point, for training, and also not only training, but also to come up with solutions to avoid risks like data loss or data leak and such things. So where are you seeing the positive sides of the AI tools in the cybersecurity space?

0:15:57 - Rob
Yeah, you know, that's a great question, because, especially in our world here at Code42, we're paying attention when data is being exfiltrated and trying to understand, for example, is that data sensitive and does a security team need to get involved? And around that, generative AI is going to play a very, very vital role, right? And the reason for it is, today every security team is short-staffed compared to the risks that they're managing, and in fact, every security team that you would talk to today will tell you that they're constantly in this firefight mode. But the firefight mode consists of a couple things: a system telling them that something's wrong, them having to go investigate, and then in many cases having to go through an investigation of some kind to determine, you know, false positive or true positive, and, if it is a true positive, what do I have to do at this point to be able to remediate that risk, and who do I need to get involved?

One of the powers of GPT and generative AI is we have literally figured out how to recreate the human brain and the way in which we think, and the beauty of this is, in those modes where we've got some level of security poverty, where they are lacking the number of people that they need to perform their job, generative AI can step in there almost as a virtual security analyst, to be able to play the role of the security analyst, right? So suddenly now I see a risk, and I've got this risk that's captured and containerized in this, you know, packet of data that says, here's what happened, here's why it happened, here's why and here's how, right? And I can fire that at generative AI and even ask it a question like, based upon the data I'm giving you, is this a real risk? And in some cases generative AI is coming back saying, well, yes, as a matter of fact, I see a problem with this, and a pattern that maybe you as a human wouldn't have seen. Or, alternatively, it could come back and say, you know what, this looks like a user who accidentally did something, and while, yes, it's a risk, you don't need to respond with the full weight of your organizational remediation power to deal with this risk, right? Like, we know we can automate our way through this, right?

So what I'm seeing right now, what's ending up happening, is that we're leveraging generative AI, even in our product here at Code42, to essentially solve this gap. And as we sit here today, in these organizations that have been in constant firefighting mode, we're using generative AI to step in and augment the humans that they've got in their security center, to essentially do a first pass at that remediation, or at least a first pass of that analysis, and then only escalate those things that really need to be escalated. We're giving them a lot more bandwidth, and I think that's the beauty of these generative AI solutions: we literally take a human attached to a technology like this, or technology attached to, you know, generative AI capabilities, and we turn out superhuman productivity as a result of that. So these security teams are going to be able to scale at levels that were never possible before generative AI. They're going to be able to address risks that aren't those burning risks, but are still high risks that before they just never had time to get to, and this technology is going to enable all of that.
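The triage loop Rob outlines, fire the captured risk "packet" at a model and only escalate what really needs a human, can be sketched roughly as below. This is an illustrative stand-in, not Code42's product: the alert fields and thresholds are invented, and the keyword heuristic inside `triage` is a placeholder for the actual generative-AI call and response parsing.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    what: str      # what happened
    why: str       # why it was flagged
    how: str       # the vector involved
    score: float   # 0.0-1.0 risk score from upstream detection (hypothetical)

def triage(alert: Alert) -> str:
    """First-pass analysis standing in for a virtual security analyst.

    A production system would prompt a generative model with the alert
    context ("based upon the data I'm giving you, is this a real risk?")
    and parse its verdict; here a crude heuristic plays that role.
    """
    if alert.score >= 0.8 or "source code" in alert.what:
        return "escalate"          # real risk: bring in a human analyst
    if alert.score >= 0.4:
        return "auto-remediate"    # accidental-looking: automated response
    return "log"                   # noise: record and move on

queue = [
    Alert("source code pushed to personal repo", "unusual destination", "git", 0.9),
    Alert("file shared too broadly", "over-permissioned link", "cloud share", 0.5),
    Alert("doc synced to managed laptop", "routine", "sync client", 0.1),
]
print([triage(a) for a in queue])  # ['escalate', 'auto-remediate', 'log']
```

Only the first alert reaches a human, which is the bandwidth gain Rob describes: the first pass of analysis happens automatically, and the full weight of the security team is reserved for the alerts that warrant it.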

It's going to be pretty amazing to see where we go from here. And I will say this to every CTO out there too, right now: you know, if you've got an established product in the market, and generative AI is something that you're not already doing some code-red-level exercise around to figure out how it fits in your product, there's a really good chance that someone's going to come along and displace you. And that's the key right now: I think all of us need to be asking how generative AI is going to impact not only the work that we do on a daily basis, but the tools and products that we build and thereby enable customers with, and that's been a key focus of ours as well, Mehmet.

0:19:46 - Mehmet
Yeah, that's great insight from you, Rob. Now I know that at Code42, you focus on insider risk detection and response. So we talked about the insider risk a little bit, so let's talk about the response. Is there also a role for AI in the response part as well?

0:20:05 - Rob
There definitely is. In fact, I gave you the example a second ago of AI being used to do an investigation in our tools. Today, we do have the ability, since we've been incorporating this, to more or less do an analysis of the data activity that's occurring and say, is this a risk or is it not a risk? In those moments where we say it is a risk, but let's not throw the full weight of the organization at it, these AI tools that are out there right now are incredible at being able to help in the remediation process. Like, I literally could create a prompt right now saying, write me an email I should send an employee who accidentally shared a file too broadly, and it would come back and it would be pretty darn good at the content it creates. So it's gotten to the point now where, it used to be, security analysts would have this pre-canned message of, this is what you should do in the future, or they would say, this is something that they need to be aware of, but in those moments that they didn't, they'd have to create content. The beauty of generative AI right now is it really allows us to create new content based upon an individualized risk and an analysis that's being done. So suddenly that end user gets super-targeted content saying what they did wrong and how to fix it, and we can deliver that without a human having to sit there, write all of that information up and actually give it to that employee. Now, the other thing which, I've got to tell you, we're experimenting with right now, and I'm also a huge fan of, is that humans in general are better visual learners than they are readers of emails, and these AI chatbots that are starting to generate videos and trainings out of text are getting really good too. We in fact used one of those bots this year for a mid-year training on AI, where we fed it a script, and as we fed it the script, it actually read the script, with a human moving around and doing all of this other stuff, and it was really darn good at what it was able to do.

I'll tell you a story about that. We were doing this training, and in the final hours, literally before we were gonna push this training out, we realized that there was a mistake in the script that we were feeding into this chatbot that had created this video for us, and within a few hours before the training, we fixed the script, we sent it in, it generated a brand-new video for us, and we were able to push that out to our entire company. Now, think about that.

If we were gonna do that in real life and we didn't have that technology, we would have had to go back to a studio. You do podcasts, you know what that's like: editing things out, trying to make it look natural so that you're not distracting the user, right? And instead, within 45 minutes, we had fixed the glitch, we had put a brand-new video out there and we were able to push that out. Think about that coming to user education, right? And think about, like, hey, create me a script, but then create me a video to deliver to this user, less than one minute in total duration, that emphasizes the key points that they need to know for next time.

And, like I said, that's kind of the power of this generative AI. It's not just about the investigation but, to your point, the changes it's gonna bring for us in remediation and the individualized content that it's going to be able to push out, all without me having to go into a studio to record a video to be able to push that out. It's a very, very powerful concept that is coming, and it's going to be here before you and I even know it. Pretty soon you might not even be having Rob Juncker on your podcast; you might have virtual Rob Juncker on your podcast, where you're asking a question and it's answering like it's me.

0:23:38 - Mehmet
I hope not, I need my real guests. Rob, you talked about, and it's not totally related to AI now, the exfiltration of data, and this term we hear a lot, but why is it still one of the favorite methodologies for the bad actors to attack customers?

Just in terms of how they exfiltrate data. Yes, yeah, so we know that, the fact is, organizations somehow have vulnerabilities and the attackers get in, but then the sensitive data goes out. Why is it still, you know, a successful campaign for the bad actors, one that works the majority of the time?

0:24:31 - Rob
Yeah, you know. So, insider risk as a whole: one of the things that we've observed is that once someone gets inside of an environment, or if they're able to manipulate a user, well, let's start with the case of someone getting inside an environment. Let's say that there's some malware that makes it through the door, and they end up on a network with some level of command and control on an endpoint. It doesn't take long for attackers to be able to discover what's valuable data and where that valuable data lives, and this is where things get a little tricky, because if it's an end user who's interacting with data that they typically interact with, let's say they gain access to a salesperson, and that salesperson, you know, has customer records on their endpoint, as an example, right, it's not hard for them to gain access to really critical, valuable information that could be used against that organization or sold, you know, on the dark web for a decent chunk of money, for people to be able to gain and leverage some data, right. What's fascinating, though, when you and I think about it, and the challenge with data exfiltration, is that data is constantly moving around, right, and in many cases, what's ending up happening is that they're grabbing small files that contain an immense amount of intellectual property or an immense amount of customer data. And if you think about the sea of data that is moving in and out of an organization, the exfiltration of data is running on the fumes of the network traffic that's actually happening. So it literally is trying to find a needle in a haystack as data is heading out. And that's where tools like ours, and others for that matter, play a really vital role, because what we're constantly doing is looking for how data is actually being moved off of an endpoint and understanding where it went.

And when we see these odd behaviors, where all of a sudden a file, you know, is being downloaded, let's say, from a source code repository, and then being exfiltrated out to a personal email address, right, we sit there and we say, well, hold on, this should not be happening, right? And then we step in and we're able to, you know, stop as well as remediate any of those risks, and also educate users so it doesn't happen again. Hopefully it's not malware, but in some cases we have to go work with other tools to be able to eliminate the malware that's running on that endpoint, right? So that's the real challenge now.
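The odd behavior Rob gives as an example, a file downloaded from a source code repository and then sent out to a personal email address, amounts to correlating two file-movement events. Below is a minimal sketch of that kind of rule, with invented event fields and an assumed corporate domain (and event ordering ignored for brevity); real insider-risk products do this over far richer telemetry.

```python
from dataclasses import dataclass

@dataclass
class FileEvent:
    file: str
    source: str       # where the file came from (hypothetical labels)
    destination: str  # where it went

CORPORATE_DOMAIN = "example.com"  # assumed; any other mail domain counts as personal

def suspicious_sequences(events: list[FileEvent]) -> list[str]:
    """Flag files pulled from a source repo that also went to personal email.

    A simple correlation rule: the ordering of the two events is not
    checked here, to keep the sketch short.
    """
    pulled_from_repo = {e.file for e in events if e.source == "source-repo"}
    flagged = []
    for e in events:
        personal_mail = e.destination.startswith("mailto:") and \
            not e.destination.endswith("@" + CORPORATE_DOMAIN)
        if e.file in pulled_from_repo and personal_mail:
            flagged.append(e.file)
    return flagged

events = [
    FileEvent("core.py", "source-repo", "endpoint"),
    FileEvent("core.py", "endpoint", "mailto:me@gmail.test"),
    FileEvent("notes.txt", "cloud-share", "mailto:me@gmail.test"),
]
print(suspicious_sequences(events))  # ['core.py']
```

Only `core.py` is flagged: it is the file that both came off a source repository and headed to a personal mailbox, which is the needle-in-a-haystack pattern Rob describes pulling out of the sea of ordinary traffic.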

The second big challenge that we run into at the moment is the fact that the ways in which data is being exfiltrated are constantly changing, right? It used to be pretty simple, right? It used to be like someone would copy a file off of an endpoint, after they gained access to it, to a remote file share that they had somewhere up in the cloud, and they were able to pick up their files and walk away with that, right? That's not the script anymore, right? The script has changed, right? They're using a numerous set of tools and technologies. They're using websites where, you know, you can essentially upload files that, you know, hold on to those files, and then they can be downloaded from other locations later.

I mean, even from a user perspective today: it used to be that the most common exfiltration mechanism that we saw was someone would take a USB hard drive, right, stick it into their endpoint, copy files off and, you know, walk out the door with it, right? I will tell you right now, and I mean this with no disrespect, if I handed my 16-year-old daughter a USB hard drive, she'd look at me going, what do I do with this, right? But for her, her preferred exfiltration mechanism is AirDrop, right? She's just gonna sit there and AirDrop from her computer to the phone. And, you know, as I talk about those 180 billion data points in 90 days that we analyze at Code42, AirDrop, as an example, has gone up 1000% quarter over quarter in terms of the exfiltration rate of that particular vector, right, and we're continuing to see USB decline, right?

We're also seeing things like, for example, it used to be that people would take data off their endpoints using SFTP or FTP. We don't see as much of that anymore. What we're seeing is people who are incredibly skilled and are using Git as a command line utility, uploading files to Git repositories that are personal or cloaked in the cloud, and doing it that way as well. So the trick here, when you talk about data exfiltration, is that it's gotten a lot more complex. The script is no longer as simple as a file going from point A to point B to a cloud file share, and data was exfiltrated.

Now we're seeing data go out via curl, SFTP and web browsers that have these cloaked modes they're able to upload through. So organizations are really struggling right now, and it's a tough problem to solve: trying to find that needle in the haystack of activity, but at the same time being able to manage that entire breadth of mechanisms by which users can exfiltrate data or, for that matter, a compromised endpoint can send that data somewhere else for someone to pick up. So it really has been an incredible sea change in the way in which data is being exfiltrated, and in the volume of that data being exfiltrated too.
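For readers who want a concrete picture of the quarter-over-quarter vector trends Rob describes, here is a minimal sketch with made-up event records. The vector names, event data and threshold are purely illustrative; this is not Code42's actual implementation.

```python
from collections import Counter

# Hypothetical event records: (vector, quarter) pairs from an
# endpoint-monitoring feed. All values here are invented for illustration.
events = [
    ("airdrop", "Q1"), ("usb", "Q1"), ("usb", "Q1"), ("git_push", "Q1"),
    ("airdrop", "Q2"), ("airdrop", "Q2"), ("airdrop", "Q2"),
    ("curl", "Q2"), ("usb", "Q2"),
]

def vector_growth(events, prev_q, curr_q):
    """Return the per-vector event-count ratio between two quarters."""
    prev = Counter(v for v, q in events if q == prev_q)
    curr = Counter(v for v, q in events if q == curr_q)
    growth = {}
    for vector in set(prev) | set(curr):
        before, after = prev.get(vector, 0), curr.get(vector, 0)
        # A vector unseen last quarter is reported as infinite growth.
        growth[vector] = after / before if before else float("inf")
    return growth

growth = vector_growth(events, "Q1", "Q2")
# Flag any vector whose event count at least doubled quarter over quarter.
spiking = [v for v, g in growth.items() if g >= 2.0]
```

With this toy data, AirDrop triples while USB drops by half, mirroring the trend Rob mentions; a real system would of course work from far richer telemetry than raw counts.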

0:29:33 - Mehmet
So yeah, I think multi-cloud, remote work, bring your own device, shadow IT, all of these have added to the complexity there, right, Rob?

0:29:47 - Rob
Yeah, and by the way, it's funny you just brought that one up. I was talking to a CISO yesterday, and the CISO was talking a lot about BYOD and the effect of some of those things. In their particular case, what was really bad is that they had two-factor authentication implemented for a number of their cloud properties but, at the same time, the way in which they managed it allowed people to use their home computers to gain access to their cloud repositories, and what they were finding was that users were, you know, gaining access to these cloud repositories.

You know, say it's one of the storage platforms up in the cloud, and they would download files from it to their home endpoint. When you think about it, all of us, your machine, my machine, we set up these perimeters of security around them, but there's no perimeter of security around someone's home machine that's sitting there in their house at night, right? They had a long conversation with me about how do you solve that problem, and we had a great dialogue around it, and hopefully we're going to be able to help them bring that to closure too. But yeah, that is a real risk right now: there are so many devices participating in that data sharing that you have to choose tools that give you a look at not only the monitored devices, but also whether there are unmonitored devices participating in any of that data exchange, where those are, and how you begin to lock those devices out too.

0:31:19 - Mehmet
Yeah, yeah, 100%. Now, Rob, a little bit out of curiosity: being a CTO for a company that does cybersecurity, how is it different from being a CTO in another space? Because I know cybersecurity, I've been there before, it's not easy. As the CTO there, you're responsible for the technology, for the code, and obviously you speak to customers as well. So how is being a CTO in a cybersecurity company different from any other place?

0:31:59 - Rob
Yeah, well, first and foremost, there's actually a bald spot right here, and my hair is grayer than it was when I started. There definitely is added pressure.

I mean, it's funny, but let's talk about CISOs for a second. CISOs have one of the hardest jobs in any organization, and there's a reason why there's so much burnout in that role and why CISOs typically last in an organization for a couple of years and then move to another organization: they're constantly under pressure and they're constantly feeling it. And in a lot of ways, it's great, because I feel like I'm a little bit of a CISO at the same time, because my product has to take into account the risks that they're dealing with on a daily basis. And, candidly, a lot of people on my team, especially on some of the teams that specialize in data exfiltration, feel the pressure to never miss an event or never miss a risk. The last thing we want to have happen is data being exfiltrated under our watch, where ultimately an organization is threatened by the information that's leaked and the cost of the breach they're having to deal with. When you say that, though, one of the things I will tell you that's different is that security, for me, is not just a way of life, it's also a cultural way of thinking here at Code42. In a lot of organizations, as a CTO, you're responsible for making sure that your architecture is good, that you're delivering product, that you're pushing releases out effectively without a lot of defects. In my world, what it means is I constantly have this security mindset, and to that end, in my CI/CD process here, when I build code, it's not that I'm just building code and running tests against that code. I build my security controls into my code, which allows us to push these releases out while making sure that we're constantly in security compliance, that we don't have vulnerabilities, that we don't have risks.
We're constantly code scanning ourselves to make sure that there's no way we could be part of an attack on any organization. And that security mindset makes it into the way in which we write code here at Code42, which, as you said, is a little bit different from some of the other organizations who think about securing their product after it's delivered to market. We think about security through the entire lifecycle of the process. Secondarily, the other thing it does is bring pressure for me and for our teams here at Code42 to make sure that we're also empathetic to security teams and what they're going through. At the end of this, it's so important that we're making it easier for those security teams, allowing them to investigate faster, allowing them to remediate better, making sure that risks they close could never be bypassed through rules and exceptions and things along those lines. We take that responsibility very seriously. And, Mehmet, candidly, growing up, I was a hacker.

Right, back in third grade. I probably should have called the prison and made reservations, but luckily I never got caught doing the hacking stuff that I did. But I've been in cybersecurity as a CTO my entire professional career, and if there were a switch on the wall that I could flip to turn off the darkness, believe me, I would flip that switch. I would change industries; I'd find a different way to use my abilities other than cybersecurity. But the reality is that switch doesn't exist, nor will it ever exist, and cybersecurity risks are here to stay. It's a cat and mouse game, where we, the people who are solving these risks, have to stay ahead of the attackers trying to exploit them, and it's a big job, because things change quickly. But it's exciting. I love it. I mean, I drink plenty of coffee too as a result.
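The release gate Rob describes, where security controls are built into the CI/CD pipeline rather than bolted on afterward, might be sketched like this. The finding records, severity scale and threshold are hypothetical examples, not Code42's actual process.

```python
# Illustrative "security gate" in a CI/CD pipeline: the build only ships
# if static-analysis findings stay under an allowed severity threshold.
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def release_allowed(findings, max_severity="medium"):
    """Return True if no scan finding exceeds the allowed severity."""
    limit = SEVERITY[max_severity]
    return all(SEVERITY[f["severity"]] <= limit for f in findings)

# Invented scan output for demonstration: one benign finding, one blocker.
scan_results = [
    {"id": "CWE-79", "severity": "low"},
    {"id": "CWE-89", "severity": "critical"},
]
clean_results = [{"id": "CWE-79", "severity": "low"}]
```

The design point is that the gate runs on every build, so a vulnerability blocks the release automatically instead of being discovered after the product reaches the market.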

0:35:58 - Mehmet
Yeah, to your point, Rob, about CISOs: I speak to them, you know, because now I'm kind of an independent guy, so they don't see me as a vendor and they speak freely to me. My greetings and regards to every CISO, whoever he or she is. These guys' lives are tough, really tough. They are under pressure all the time.

You know, the whole company's future depends on their strategies and the way they act. And at the same time, there are people like yourself, Rob, and many other leaders in cybersecurity, who genuinely try to solve this problem that everyone knows exists but that people don't figure out until it happens. And, you know, just before we started to record, of course I knew about Code42, but like I said, I had another look, and really, congratulations, Rob, because I've seen big names on the website among your customers. I've seen Snowflake, CrowdStrike and Okta. Really, congratulations on that.

And for me, as someone who comes from a technology background, every single product that solves a problem is something that deserves to be looked at. So, guys, have a look at Code42. It's a great solution. I heard about it a long time ago, by the way; someone was telling me a couple of years back about, you know, insider threat, at a time when people were still asking what insider threat even was.

0:37:47 - Rob
I know, it's funny how far it's come, by the way. Thank you. And something to share with you: I really appreciated your comments a second ago about CISOs, and it's great that, as an independent, you're able to have those conversations, because most of the time there's this friction that exists between a CTO and a CISO, where the CTO is like, I've got to get product out, I've got to get product out, and the CISO is like, I've got to secure it, I've got to secure it, and those two things sometimes conflict. I am blessed, and I mean that in all sincerity, with the CISO I have here, JD Hansen. She is an amazing CISO to work alongside. And if there's one other bit of advice I would give to every product company out there today, regardless of what industry you're in, it's that I believe the CTO and CISO need to bond in such a way that there's no gap between those two functions in an organization. If you create a great relationship with your CISO, the two of you can solve just incredible problems together, and I've had that luxury here at Code42; I'm again blessed with my CISO and what she's brought to the equation.

The other thing is, thank you also for pointing out Code42 and some of the relationships that we have. One of the things I do love about Code42 is that, yes, we've got a great customer list, but more importantly, if you look at our customers, you mentioned a whole bunch of them that are in the security business, and they use Code42, even when they have a security product right alongside ours, to be able to solve the security equation.

What this means is that I've got fantastic relationships with CTOs at many of those organizations you brought up, and at the other ones that are on our website, and the true win, in my perspective, is seeing all of those security companies use Code42, because it means we've got a great ecosystem, we've got a great group of people that we work alongside, and we're recognized by our peers in the security industry too. So it's a lot of fun, the work that I get to do. And now generative AI is really presenting quite the set of capabilities, as well as challenges, for us to manage as an exfiltration vector and to understand those risks. And again, we're really proud of the steps we've taken to advance our roadmap there and deliver an experience for our customers where we can manage those risks too.

0:40:01 - Mehmet
Yeah, and let us not forget our friends the CIOs as well, because I have a lot of them, and because you guys take care of all the operations and have to talk to your CISO and to your CTO as well. I empathize strongly, because I used to be on the other side of the table, like ten years back, so I understand this. I really enjoyed the conversation today. Was there anything, Rob, that you wish I had asked you?

0:40:35 - Rob
No, I would just come back to it again. I think all of us right now are living in an exciting time, and the reality is that these generative AI technologies have advanced so fast, so quickly, and, for what it's worth, I don't even think we've seen anything near the tip of the iceberg of what's coming underneath it. I think these technologies are going to continue to push boundaries in business and offer us new opportunities. And for those of us who are embracing these moments and taking them to secure these technologies the right way, there's huge upside in our capabilities and in the way we can grow our business and the innovative capabilities we can drive to our customers too.

And I also just want to say thank you for having a dialogue on it, because so many of the podcasts I have hopped on, and so many of the conversations I've had on this, have been all fear-based: oh my gosh, could this happen, oh my gosh, could that happen. I loved our conversation today because you focused so much on the goodness of this, and I think it's important people hear that we've got to turn the corner. We've got to not be afraid of these technologies. We've got to find the right ways to embrace them, bring them in and leverage them too. So thank you for that.

0:41:43 - Mehmet
Oh, my pleasure. And actually, you know, it's not because I'm a fan of this specific technology that I don't bring up the fears; it's because, as I've repeated on the show multiple times, any new technology has its own positive sides and negative sides, and AI is no different. If you always want to think about it in a negative way, we could have said, oh, you know what, fire, which is a technology, is dangerous because we can burn ourselves. A lot of my guests give the same example of when people started to use cars and ditched horse carriages. We have a lot of examples, and we can see the potential of this technology on the positive side. Yes, we need to be careful, we need to be, I would say, responsible in using the technology, like any other technology, even cybersecurity. And you mentioned it yourself, Rob: it's just the switch, right? You can decide to be on the bad side or the good side, and hopefully many people will be on the good side. So that's right.

I really enjoyed it also, Rob; it was really fun to have you today and to have all your insights. I really appreciate it. And, Rob, of course, I will put the company's website in the show notes as well, and if anyone has questions for Rob, you can come back to me and I will pass them to Rob. And, as usual, guys, this is the way I end my episodes: if you have any feedback or any questions, don't be shy, don't hesitate to reach out to me. Also, if you are interested in being a guest on the show, please reach out.

Thank you very much, and we'll be back very soon. Thank you. Bye-bye.

Transcribed by https://podium.page