Beyond Tech: Building a Security Culture and Navigating AI's Impact with Dakota Riley
TLDR;
- Having a strong engineering culture helps align security objectives. Focusing on the problem rather than the solution is key to strong security implementation.
- To address cultural security challenges, focus on small wins and build from there. Also, favor taking calculated risks over a risk-averse approach to experimentation.
- A secure SDLC is more about better security practices, like threat modeling or secure architecture review, than about tools. Tools help fix the gaps, but they don't fix the core issue in an organization.
Transcript
Host: Hi, everyone. This is Purushottam, and thanks for tuning into ScaleToZero Podcast. Today's episode is with Dakota Riley. Dakota is a cloud security and automation enthusiast. He also dabbles in application or product security, software development, and building tooling around it. Thank you so much, Dakota, for joining me today in the podcast.
Dakota: Happy to be here.
Host: So I know that I did a very short intro. Do you want to add anything about your journey? Like maybe how did you get in, or what keeps you excited about the security space even today, if you want to add anything?
Dakota: Yeah, I guess not to give the whole timeline, but real briefly, I got started with computers early as a kid, you know, playing computer games and then, you know, finding fun ways to break them or modify them. In high school, I was running, like, World of Warcraft private servers just for myself and my friends. Don't tell Blizzard. And then, you know, eventually I was like, hey, this is like a three-tier web application. Technically it's got a backend and a front end and all that stuff. And I started to dabble and learn.
And then I took some technology classes in college, and I realized, hey, you can make a career out of this. And it just kind of fell together. I basically worked as a systems administrator through college, just lots of automation, lots of kind of typical IT tasks. I got a job right out of college working at CyberArk, basically bootstrapping their privileged access management software, and that's where I encountered cloud. I was super fascinated by the idea that you could just spin up infrastructure.
And it was funny, in one engagement, the customer had kept a very on-prem style process of managing infrastructure. Basically, we would schedule these engagements three or four weeks in advance, and then they'd have to set up all this stuff beforehand. But that didn't happen. There was a handoff somewhere, but they were in the cloud. They were in Azure. And it was driving me insane, where I'm like, just click the Launch Instances button. Let's beg for forgiveness and save the company $40,000 in professional services fees.
And after that, I was like, I got to get into cloud. And I ended up getting a job at AWS as a cloud security consultant where I was just tossed into the deep end. I specialized in basically just automation, kind of auto remediation at the time before all the major tooling came out, like kind of baking security into pipelines, building tools, building automation.
And from there, it's just been kind of a wild ride. I've worked at startups, I did a stint in the kind of public sector adjacent ecosystem that was also like technically a startup, but we were working with like public sector customers. Today, my day job, I work at basically a SaaS company as a staff security engineer.
I also get to share my passion for security, cloud security and automating all the things by teaching with SANS and...
That's me in a nutshell, just a wearer of many hats. I love to learn, love to share, and love to build and break things.
Host: Awesome. So you mentioned a few things. One thing you mentioned was CyberArk. You must have heard about the recent acquisition of CyberArk, right? I'm not sure how you feel about that. And the other thing you mentioned was the AWS Security Consulting team. Were you working with folks like Chad or Kailash when you were at AWS in security consulting? Because they were previous guests on our podcast as well.
Dakota: Yeah, so I guess we'll start with the CyberArk and Palo Alto acquisition. I think it's really interesting because Palo Alto is just on an acquisition spree where they acquire a bunch of cloud security tools. I think it was like Prisma. Sorry, it's Prisma now. It was RedLock before and all that. And now they're acquiring basically all of CyberArk's identity suite, which CyberArk has more than PAM, too, right? But I think what's going to be really interesting is there's some newer players in that space, too.
StrongDM, for example, where they're building it from more of a cloud-native angle. It's going to be interesting to see, will customers go towards those types of vendors, or will they try to go with the platform play with Palo Alto now? I have my personal opinion, but I always tell people, I'm an engineer. If you go ask a CISO or someone who signs the checks, they might give you a different answer. I prefer to get best-in-breed solutions and then integrate where we can.
Otherwise you end up trying to make this big clunky platform fit to you. Usually their innovation slows. Not saying that'll be the case. That's my opinion. Anything could happen. It'll be interesting to see that play out. And then, so the second part, I do know Chad. We didn't work directly together. It's such a small community. He was also on the professional services team. I was on the, we called it the global security transformation team, which they've reorg'd like a ton of times now.
Global was supposed to mean that we'd build out basically shared practices and tooling and stuff like that. But I personally was like, I want to go work with customers as much as possible. So I ended up doing that anyway. But yeah, I do know Chad. That's awesome.
Host: Awesome, awesome. So we have some common threads. So yeah, that's a great start. So with that, let's get into the topic of the podcast today, right?
So we want to focus on the culture and mindset of security engineering. Like you have worked with many customers, you must have a lot of learnings, which hopefully we can learn from you today. So let's begin with the core of this discussion, right? In your view, what are the top three elements of a strong security culture?
Because when you say culture, there is no checklist that you can go through and you have a security culture. It takes a lot of time to build. So what are some of the top three elements that you look for when you're trying to build a strong security culture, security engineering culture?
Dakota: Man. I mean, this is going to sound silly, but in my opinion, good security is just good engineering, to be completely honest. I feel like we got away from that. Those things aren't thought about when we're building products, or they're just an afterthought. I think it really comes down to empowering both engineers on the product side and engineers on the security side with a sense of ownership. You don't want this whole,
Hey, I'm at the top. I'm going to decide the exact SIEM we're going to use, and it's going to trickle down, and it's on you to figure it out. Because the reality is, your engineers are the ones in the trenches, right? They're going to be closest to the problems that you're seeing. And some could argue, well, they don't have the business context, but why not empower them with that? Right.
So I don't know if I could boil it down to three, but I think a lot of it just comes down to like creating a sense of ownership, right? Like you're responsible for the thing you create. And even on the security, I don't think, like, I'm not a believer that like, oh, shift left goes away and the need for a security team just goes away. Cause that's not true either. Cause there's going to be things that need to be done at a platform level. You're always going to have like the regulatory side of things that has to get handled.
But even on that side, instead of thinking so much about, oh, we need this acronym alphabet soup. Like we have to buy CASB, we have to buy CSPM, you should think backwards or work backwards from the problems you want to solve and use that to decide this is what we're going to do. And then the answer could be, yes, we're going to buy a product. Yes, we're going to build something.
And it might sound obvious, but I don't really think that thinking is as common as people think, especially depending on the type of company you're at. I'm really thankful that I got to work at AWS. I worked internally at Amazon for a customer too, so I got to see how they do things. I've worked at a couple of different startups, and there you're working from nothing, so you have to think that way, right?
But yeah, I think it's really ownership, you know, like really empowering the engineers, the ability, like the ability to make decisions, own their things, and even potentially have to, in some cases, the consequences of not doing the thing, right? Don't shield them from it.
And then just work backwards from the thing you're trying to achieve, be it security or not.
Host: Yeah, so you touched on three things, even though you said that maybe I won't be able to say three things. So the first thing was, I just want to clarify. So you said good engineering is somewhat equal to good security, right? When you say engineering, are you saying the engineering which focuses on building products or you are saying about the security engineering? Because there are sort of two roles in a way, right? Security engineer and then the engineer.
Dakota: Yeah.
Host: I'm assuming that you're talking about the regular engineer who is building new business capabilities.
Dakota: I would honestly say a little bit of both. When I said it first, I intended product engineering because a lot of security today is just chasing after all these things that weren't thought of in the design and so on and so forth. But I think the answer is really both. I think a security team their job needs to be to empower those teams with this is the direction you need to go or these are the things you need to do or here's the paved roads that you can follow.
But ultimately, it falls on them to just build it. Security is an aspect of software quality, right? When we think about it, it has to be built in. Otherwise you're just chasing it around. You're buying 50 products to clean it all up and to chase the mess around, right? My mindset is usually, hey, instead of having to deal with this problem, what if we could just eliminate said problem altogether?
And I know it's not always that easy, especially enterprise level, right? Legacy systems don't just go away. Things at scale get challenging. But as a security engineer on a security team, I want to focus on those problems and not the silly hygiene problems, because those are way more intellectually stimulating and interesting than, hey, my app allows hard-coded passwords with no MFA or something. That's kind of what I'm getting at.
Host: Yeah. And you are spot on, right? When it comes to enterprise, often the conversation comes down to ROI, right? If something is working, why do I need to upgrade? Why do I need to patch and all of that, right? And that opens up a can of worms and things like that.
The second point that you touched on is ownership, right? And I don't think it's just about security, right? It's about engineering too. If you give ownership to a team member or a team, they'll do their best work, right? Versus just following a checklist: I need to do these five things and I'm done, right?
And the last thing that you touched on is focusing on the problem rather than just buying a tool or a solution and trying to fit that in. And that applies to engineering as well, right? You should focus on the problem and work from there, rather than looking at a CSPM solution and trying to fit that into your security program. So yeah, all of that makes a lot of sense. Now, the follow-up question is, why is it a challenge for organizations to achieve this? Like, to do these three things, let's say.
Dakota: Man. I really think it depends on the type of organization, right? I think a lot of startups and cloud-native companies are leaning into that mindset from the start: instead of, you know, preparing for this massive security organization, they hire a leaner team of engineers who are T-shaped and can fill multiple roles. And they have those builder skills where they can fill gaps as needed.
I think where it gets tricky is on the enterprise and regulated side, where those roles and responsibilities have already been cast and laid down. And there's always that bit of organizational inertia. And then on top of it, I'll pick on, for example, Scrum, or systems that are used to communicate up what your team is doing to organizational stakeholders.
Some of that stuff can kind of become performative. And on top of that, there's also the element of, we don't trust our engineers to make these decisions; these need to come from leadership and trickle all the way down. So it's very, very hard to shift from that type of bureaucratic, top-down structure to a more engineering-led one.
You know, for example, just having psychological safety. Because the other thing too is, not to go in a different direction, but if your engineers think they're going to get fired for making a mistake, there's just going to be zero innovation whatsoever and no attempts to improve things. When you're changing things, there's always risk.
Now, going back to the ownership piece, you should, like, if I'm an owner of something, like if I'm the one that's going to get paged because I wrote a poor detection, I'm much more incentivized to fix it. Right?
It is very hard. Literally, you're talking about cultural transformation. I don't have the best answer. I think you have to take the small wins. Like, hey, this one project that an engineer researched on their own: show the benefit, show the value, show the cost savings or the risk mitigated, if you can, at the business level. But it doesn't happen overnight, because you have to challenge those existing bureaucratic structures.
Or get people to believe that, hey, engineering doesn't need to be micromanaged; they need strategic direction. You don't need to tell them specifically which framework to use. They need to know that these are the things we should optimize for. So I don't have a great answer for that, but I do acknowledge it is incredibly, incredibly difficult.
Host: I think you tied it back to what you said earlier, right? Like it's about looking at the problem rather than the solution, and one of the things that you mentioned, small wins. Absolutely, it makes a lot of sense, right? You can show progress in that case. But when it comes to enterprise, you always get into a big-bang implementation of a new security program and things like that. That often creates that friction, right? Rolling out a new security program.
One of the things that you touched on is the size of the team sometimes matters, right? How the culture shapes up and how security engineering is set up. We spoke about enterprises a little bit. So if I take that to a startup, often startups are trying to move fast and roll out new capabilities and things like that. How do you see that different from a large enterprise?
Do you see the challenges being different or do you see the same challenges even at a startup level?
Dakota: Definitely different. Also, I mean, you know, a startup is not a startup, right? You've got startups that are basically super, super early versus one of these scale-ups that just had an IPO, right? I think the challenge at a startup is, you absolutely cannot get in the way of delivery at all, because the business literally might not survive if you do.
Not that you should get in the way of delivery at an enterprise either, but I think it's easier there to just shove that away, because we're a big enterprise, you won't feel the direct impact of that. The other thing with startups is it's tough because you have fewer resources. You're not going to get a team of 20 people, which by nature usually means that when they do hire security people, they look for people with hands-on skills who can think about the problem and whatnot.
At an enterprise, yeah, sure, you might get a large team of people, but that has its own problems too, because now you have this team where you have to manage communication. Communication might break down. Do you create silos? So definitely, definitely very, very different worlds. Having been in both: at the startup, I could just go do something for the most part, create a pull request.
Obviously, if I was going to create a pull request for somebody's application, I'd talk to them, give them context, get their buy-in. And that's a little different at the enterprise level, where usually there are multiple stakeholders, especially if you're trying to do things cross-team, which is the most impactful. If you're a security team that wants to change something at the platform level but you don't have ownership of that platform, that becomes very complicated.
So at one, you're not as constrained by people and cross-team ownership, but you're constrained by what you can do. At the other, the constraints are very different.
Host: Yeah. So now the question is that, so now that we understand that there are different challenges and things like that, how would you address the challenges if you are solving for an enterprise or you are solving for a startup? What advice would you give to, let's say a startup security lead who has just joined versus an enterprise CISO?
Dakota: Man. So for the startup, I think the questions you got to be asking, like if you're the first security hire is like, what are the bad things that could happen to the business that would literally cause it to go out of business or cause irreversible damage? Right. And that could come up with different things too, because it also depends on why they hired you. Right. Some of it could be like, hey, things are getting out of control.
We need to really rein things in. It could be that they're getting requests from some of their customers that they need to meet certain regulatory requirements. So I think you've got to dig in and figure that out, and then use that to construct a roadmap: these are the things I need to do. At least that's how I'd approach it.
Enterprise is so tough because, to me, a lot of enterprise comes down to how you align and structure your teams and what incentives you create for them to perform, right? And I don't have a great answer, because I've never built an enterprise security team. I've been on them, or helped or worked with them, but I've never constructed one from scratch.
But for example, if you're on a team that's around maybe detection and response, it's probably important for you to be deeply involved in the logging processes and the building of those detections and responding to those detections. If those are siloed across different teams, it sounds trivial, but if they are, then that becomes really hard. If you can't self-serve on basically adjusting certain fields in your logging pipeline or something like that.
You have to go to another team. That creates friction right there. I'm trying to figure out which direction to go here. I guess I would also say, too, I think how you communicate with teams outside of security is really, really important. Because a lot of security programs, especially at that level, are teams that create work for other people and other teams.
Unfortunately, we don't like to acknowledge that fact. So basically, having ways where you can front-load that work for those teams, having ways where they can self-serve, having a paved road before you, for lack of a better word, yell at people about a misconfiguration or something, that's important too. So I think it comes down to how you structure the teams and how you get that kind of holistic thinking about a problem. Instead of just being, hey, we cut tickets for vulnerabilities.
It would be more like, no, we pull in all this vulnerability data and misconfiguration data. We look at the things that we consider to be real risks. We auto-fix things where we can. I know that's a dangerous word, auto-fix; we can talk about it later. But we try to build paved roads and auto-fixes where we can. And when we do involve you, we give you as much information and context as we can so you can self-serve on the problem. But that's really hard if your team's very siloed.
Right, like if your team's goal is to cut tickets, for example, you're going to cut tickets, right? If your team's goal is to remediate risks, which is a nebulous word, it gets a little better. So I think it's all about the incentives and how you structure your team when it comes to enterprise security teams.
Host: Yeah, yeah. So what I'm hearing is for startups, it's often goal driven, right? You have hired your security engineer because you have to have a secure SDLC practice or you need to have vulnerability management practice, something like that, which you want to start with. So you focus on that. But when it comes to enterprise, it's a different beast in a way, right? What are the practices you have today? Is it siloed? Is it central? Based on that, you have to sort of change your tactics.
Now the follow-up question to that: this all comes down to budget, right? Somebody has to approve a budget so that you can hire people, you can buy tools, you can have all of this in place. And that will help set up the right culture and things like that. So we got a question from Ashish Bhadouriya: what are some of the KPIs that you would recommend tracking so that you can show that there is an improvement from a security culture perspective in an organization?
Dakota: Oh, man, that's a real tough one. I'm going to do my best to answer it, because KPIs are so slippery if you only look at the number, right? A number by itself is not context. There's always a reason why the number is the way it is, more data or more context behind it. To me, I would almost look at it like: what kinds of problems, whether you're detection and response, vulnerability management, or infrastructure security, are coming into your team unhandled?
And are we really re-solving the same problem over and over again, or are we solving new problems? I've seen other people refer to this; I think it's the Google SRE handbook. They talk about this concept called toil. And I worked at a startup where they were like, toil, anti-toil, anti-toil everywhere. And toil is just this word for undifferentiated heavy lifting. It's the work that's boring and doesn't do anything to add value to the system.
To me, I would be measuring that: are we just re-solving the same problem over and over again? Or are we building paved roads for, hey, secure container base images, secure cloud environments, log ingestion, those things? Those things are toil. We should build a repeatable thing that can be used over and over again.
And then we're getting pulled in on the more interesting things. So that's not really a KPI. I'm trying to think of the best way to boil that down into a KPI. Yeah.
Host: Yeah, it's more like how many new capabilities you rolled out versus tech debt, right? How much tech debt did you do re-architecture of something versus did you ship out a new form factor or something like that? Like at the end of the day, what has changed for the users versus you did internally? I'm just trying to simplify it. I know that it doesn't match exactly what you're saying, but something like that, right? Like creating new value versus maintaining existing things which adds to the toil.
Dakota: Exactly. Exactly. Yeah. Yeah. I'm sure there's some actual KPI out there that kind of encapsulates that, that somebody might know. But like, that's what I would go the direction towards, like toil versus new capabilities and work and solving novel problems, like the problems humans should solve.
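The toil-versus-novel-work split discussed above could be tracked as a rough metric over a team's closed tickets. This is a minimal sketch, not an established KPI; the `kind` labels are a hypothetical convention that would come from however your ticketing system tags work:

```python
from collections import Counter

def toil_ratio(tickets):
    """Fraction of closed work that was repeat/toil versus novel
    problem-solving. Each ticket is assumed to carry a 'kind' label:
    'toil' (repeat, undifferentiated work) or 'novel' (new capability,
    paved road, one-time fix). The labels are illustrative."""
    counts = Counter(t["kind"] for t in tickets)
    total = counts["toil"] + counts["novel"]
    return counts["toil"] / total if total else 0.0

# Hypothetical quarter of closed security-team work
closed = [
    {"id": 1, "kind": "toil"},   # re-triaged the same S3 misconfig
    {"id": 2, "kind": "toil"},   # manual log-field fix, again
    {"id": 3, "kind": "novel"},  # built a paved-road container base image
    {"id": 4, "kind": "toil"},   # another repeat finding
]
print(f"toil ratio: {toil_ratio(closed):.0%}")
```

A falling ratio across quarters would suggest paved roads are absorbing the repeat work; as noted above, the number alone still needs the context behind it.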
Host: Yeah, yeah, makes sense. So we spoke about security culture a little bit. Early on, you touched on how good engineering is good security, in a way. And one of the things that happens in engineering is that security is often looked at at a later point: we'll do the design, we'll do the development, and before we roll out, we'll ask security, can you just say, yes, this looks good?
Or something like that, right? So that's where the shift-left movement started. And there are a lot of DevSecOps practices that organizations are putting together. How do you see that being done effectively, rather than developers or engineers looking at it as a roadblock, or security becoming a gatekeeper to the product rollout or something like that? How do you approach that?
Dakota: Man, there's a lot of directions we could go here, but I'm going to start off by saying that I think a lot of people got caught up in the kind of product boom where all these vendors jumped on and we're kind of using DevSecOps as basically a marketing scheme. And now when people say DevSecOps, you think, SAST, DAST, SCA and all that.
And those are tools that probably do make sense as part of an effective DevSecOps program, but they are not DevSecOps. You haven't just thrown a random SCA tool into your pipeline and solved it, because it's like, is it returning meaningful things?
So the first thing I would say, at minimum, is challenging that notion that you can just buy products and integrate them into a pipeline, because your engineers are going to hate you if those products just blow up pull request pipelines and block them with things that are useless. You'll lose credibility very quickly.
We'll go all the way left. I think as far as secure design of products, you need to do your best to make the things you're going to ask for as clear and prescriptive as possible early on. So if you have logging requirements for your applications, for example, they need to emit some sort of audit event, that needs to be upfront and self-service to enroll in.
I like to point to Amazon's, or AWS's, internal release requirements for services. Take CloudTrail, for example. There might be some people out there who'll say, well, CloudTrail wasn't always a release requirement, but I believe it is now. It's about having those things upfront, so an engineer that's building an application or product or service at your company can know as early as possible.
These are the things I need to meet. I think the second part of that is you need to take as much of your security posture and policy and enforce it as policy as code. Because again, that's the developer's feedback mechanism, or an engineer's feedback mechanism is like, how do I know this is good to go?
And that's not just unit tests or application tests. It should be security requirements too. That does not mean just shove a bunch of SCA tools in there. To me, that means you need to think about the things you want and actually either potentially build custom checks or tune things to match your org's posture.
And then finally, continuously get feedback of those things and improve them. So what I've seen is, obviously we'll have checks in place in a pipeline that are kind of more strict or non-negotiable. You might have some lower fidelity checks that are more false positive prone. Those shouldn't block a pipeline. Those might run and then go somewhere else that like a product security team or an application security team can manage and observe.
And as we find just new use cases, hey, this thing should be not allowed or this thing should be done this way, that gets baked back into that. So it's a kind of a continuous feedback loop, if that makes sense. But you're not going to just buy a product and that just happened, right? All that's got to be informed by your own risk posture, what's allowed and what's not allowed. I'll give you an example real quick.
I would even say, if you have a governance process for what cloud services are allowed, right? Instead of having it be in a Confluence page somewhere while all cloud services are allowed anyway, you could enforce it in your pipeline as policy as code, and in addition as an SCP as well. And when those things get approved on a per-account basis, it's all baked in. Your governance actually has teeth: what's in reality is what we believe it is, not a Confluence doc here or a spreadsheet there while what's actually happening in prod is completely different over here.
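The governance idea Dakota describes, an approved-services list enforced as policy as code rather than living in a Confluence page, could be sketched roughly like this. The allow-list, the `aws_<service>_<thing>` naming assumption, and the plan contents are all illustrative, not any real org's policy; real setups often use OPA/Rego checks alongside AWS SCPs:

```python
# Minimal policy-as-code sketch: flag any cloud service in a Terraform
# plan that is not on the approved list. Allow-list is hypothetical.
APPROVED_SERVICES = {"s3", "lambda", "dynamodb", "sqs"}

def check_plan(resource_types):
    """Return the set of unapproved services referenced by the plan.

    resource_types: iterable like ["aws_s3_bucket", "aws_sagemaker_domain"].
    Assumes Terraform-style aws_<service>_<thing> resource names.
    """
    violations = set()
    for rtype in resource_types:
        parts = rtype.split("_")
        service = parts[1] if len(parts) > 1 else rtype
        if service not in APPROVED_SERVICES:
            violations.add(service)
    return violations

plan = ["aws_s3_bucket", "aws_lambda_function", "aws_sagemaker_domain"]
bad = check_plan(plan)
print("blocked:" if bad else "ok:", sorted(bad))
```

In a pipeline, a non-empty result would fail the build; pairing the same list with an SCP makes the policy hold at runtime too, so governance and reality stay in sync.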
Host: Yeah, I think there are a few things which I really like. One is the feedback; I'll come to that in a second. The other key thing that you highlighted is tools, right? Bringing in a tool doesn't solve your DevSecOps or shift-left security challenge. It just shows you the gaps; the question is how you address them.
How do you go back one step and maybe do the architecture the right way, the security requirements the right way, so that when you run the tools, you have minimal findings? Of course, there are false positives and things like that with tools which need to be addressed. But as you said, if you go even further left, if you do the security requirements or security architecture review, then there are fewer findings when somebody raises a PR.
And the feedback part, yeah, that's a great point, because at the end of the day, as a security team, you cannot just say, hey, here you go, I integrated a tool, there are 5,000 vulnerabilities, go fix it. You need to work with the team on their feedback so that you can find the false positives. You can find out what's reachable; I'm just taking an example, right? What's reachable? What should be excluded? Fine-tuning like that you can only do when you have that feedback loop with engineering.
Otherwise, they'll just look at it, there are 5,000. OK, we'll maybe look at it in future. And that never happens anyway.
Dakota: Yeah, I totally see that. On that note, real quick, I was just going to say: one thing I advocate for is, if we create fewer tickets and injects for engineering teams, they're more likely to actually do them when we do need to take some of their attention and time from what they're doing. It's literally the alert fatigue that we see in the SOC, but it's ticket fatigue, basically. If they get one and it's really well detailed,
It's like, here's exactly where you fix it, or we fix it for them, right? Whether that's at the platform level or giving them a PR, it's more likely to get done. Sorry, just when you said that, I had to throw that in there.
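The well-detailed ticket described above, one that tells the engineer exactly where and how to fix the issue, might be generated from scanner output along these lines. The field names and the example finding are hypothetical, not any real scanner's schema:

```python
def render_ticket(finding):
    """Render a raw scanner finding as an actionable ticket body.
    The finding dict's fields (severity, title, file, line, impact,
    fix) are an illustrative schema."""
    return "\n".join([
        f"[{finding['severity'].upper()}] {finding['title']}",
        f"Where: {finding['file']}:{finding['line']}",
        f"Why it matters: {finding['impact']}",
        f"Suggested fix: {finding['fix']}",
    ])

# Hypothetical finding enriched with location, impact, and a paved-road fix
finding = {
    "severity": "high",
    "title": "S3 bucket policy allows public read",
    "file": "infra/storage.tf",
    "line": 42,
    "impact": "Exposes customer exports to the public internet.",
    "fix": "Set block_public_acls = true, or adopt the paved-road S3 module.",
}
print(render_ticket(finding))
```

The point is the front-loaded context: an engineer receiving this can self-serve on the fix instead of triaging a bare alert, which is what cuts the ticket fatigue.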
Host: No, no, it makes sense. Now, we spoke about engineering quite a bit and the practices and things like that. So early on, we touched on security engineering and product engineering in a way, right? And they work hand in hand.
So when it comes to security engineering, there is often also a career curve, right? Somebody joins fresh out of college, they learn, and they get better, because they are sort of an interface between the security objectives and goals of the organization and product engineering. How do you see security engineers growing in their career? How can they be mentored so that they also have a proactive mindset when it comes to implementing some of the security goals?
Dakota: Man, that's a good one. So part of it, I think, is just driving the engineering mindset and thinking. When I encounter a problem, instead of trying to fix just that one instance of the problem, I need to step back and think, is this a common pattern in our organization? I need to be looking for those opportunities where I could basically completely solve a class of problem.
You know, like, to call out Semgrep, I know they talk about eliminating classes of vulnerabilities. But think beyond just static analysis to engineering generally: hey, I'm about to do this thing. Should I do this one specific thing as a one-off, or is this something that the rest of the company needs? So I think driving that mindset first is one.
As far as mentoring, I was really lucky that I got to spend a lot of direct time with really good engineers, and not just security engineers. I got to hang out with a bunch of software engineers who taught me how to debug and think through those things, how to break down a problem into smaller pieces, and then build stuff.
Just getting that mindset of exploring: oh, what if I did this? And unfortunately, I think you have to do it a little bit outside of work, because you need to be able to do it in an environment where there are no consequences, where you could just build something and even just get 70% of the way there, learn new things, try new things, even if it doesn't work.
Maybe you learned about a new tool or a framework, or just how to think about systems. So I think it really comes down to your senior folks mentoring those things into your junior folks: how to think about problems, how to recognize that what I'm about to do is a one-off thing and maybe it should be done somewhere else altogether.
And just get your hands on the keyboard and tinker, build, break things, mess up, get mad at the keyboard, walk away, come back. It's all part of the process.
Host: Yeah, I think this connects back to what you said earlier really well, like the alert fatigue, right?
If you have to work on 100 tickets a day, you will not get a chance to step back and think, hey, can I apply something similar elsewhere, or can I change the process slightly, or can I log something more, so that we can get rid of some of these challenges that I'm noticing with this particular ticket and that will help with other tickets too?
So yeah, being focused on a particular ticket versus thinking about the overall impact, for sure that will help. Now, a question on that: you touched on mindset, right? When you are hiring, let's say I'm interviewing Puru, how would you find out whether he has the mindset that you're looking for in a security engineer?
Dakota: I personally think this is a very controversial question, because if you ask different people, you're going to get different answers. I love open-ended questions that ask, hey, we have this fictional database system that produces these types of logs. How would you secure it, or how would you deal with these problems?
You leave it open-ended because what you want to see is how they break down a problem into smaller steps. And also, what questions do they ask you?
Like, oh, hey, is it internet facing? Does it hold any actual sensitive data? How does it produce logs? And maybe that's a kind of trivial example. But I don't want to say system design, because I don't think that all security engineers need to do hardcore system design.
But if there's a security flavor of that, where it's like, here's a problem, and see how they explore it and break it down, that would be the way I go about it. And a good candidate with a good mindset would basically stop, try to break the problem down into smaller pieces and figure out what we're actually trying to solve for, and then start to dive into the technical, the engineering side of it.
Host: Yeah, so it's more about the approach versus the solution, in a way, right? How you approach it and how you get to the solution, rather than the solution itself. I know that in engineering interview processes that's used quite a bit, but it makes sense to apply it to security as well, so that you see the mindset of the person you are interviewing. Makes sense.
Now we want to switch gears; we cannot record a podcast without talking about AI. We are in the age of AI. So one of the challenges with AI, or with LLMs, is that they're not deterministic in any way, right? If you ask the same question 20 times, you will get different answers. It's not like querying a database and getting an output; it's trying to predict, and things like that. From a security engineering standpoint,
How do you address this non-deterministic nature of LLM outputs? How would you design around it?
Dakota: Yeah, so are we talking about using LLMs for security use cases, or the security side of applications that use LLMs? Which are we talking about?
Host: I want to say the application, but yeah, if you have thoughts on both, why not?
Dakota: OK. I would say, I mean, well, some of it comes down to this: if you're building tools on LLMs, there are frameworks out there, for example Pydantic AI, where you can do structured outputs and basically give it an enum of exactly what you expect out of it, and reject anything else. So that's a guardrail.
But the thing that I would say is that the whole point of an LLM is you don't want it to be deterministic. To me, that almost sounds, and I'm not an AI/ML expert by any means, I've just played with them, that almost sounds like overfitting, right? Because I thought the whole point was that it's good at problems that don't require determinism. And I think there are plenty of automation flows.
For example, I'm really excited for the application in SOAR tooling and engines. Because right now, to write SOAR workflows, you have to be very specific and precise. Parse this field out. If it goes here, it goes here.
But what if we could write playbooks that still have all the deterministic pieces but can handle multiple different types of alerts of the same class? Maybe you could have, I don't know, basically an LLM-backed agent for cloud credential theft. And it can handle cloud credential alerts; it's got tools for log lookups to different cloud providers, or different pre-canned queries. And there's still a human in the loop.
I guess it's kind of two-pronged. There are things you can do to add context to the prompts to help you get a better answer. But I'd also argue that if you need determinism, is an LLM the right tool? Are we applying it to the right use case? If you don't need determinism, I think it's a great way to connect different deterministic capabilities and tools. But I wouldn't go as far as to say you're ever going to make it completely deterministic, because I think that's the whole point of it. Maybe that's a spicy take, but...
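A minimal sketch of the kind of playbook Dakota describes: routing and enrichment stay deterministic, and the LLM is confined to drafting a summary for a human reviewer. The alert class, handler, and stubbed LLM call are all hypothetical, not from any real SOAR product:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Alert:
    alert_class: str
    raw: dict


def enrich_credential_alert(alert: Alert) -> dict:
    """Deterministic step: pre-canned log lookups (stubbed for the sketch)."""
    return {"recent_logins": ["203.0.113.7"], "mfa_used": False}


def llm_summarize(alert: Alert, context: dict) -> str:
    """Non-deterministic step: a real playbook would call an LLM here.
    Stubbed so the sketch runs offline."""
    return f"Possible credential theft; MFA used: {context['mfa_used']}"


def run_playbook(alert: Alert) -> dict:
    """Deterministic routing; the LLM only drafts text, never closes the alert."""
    handlers: dict[str, Callable[[Alert], dict]] = {
        "cloud-credential-theft": enrich_credential_alert,
    }
    if alert.alert_class not in handlers:
        return {"disposition": "escalate", "reason": "no playbook"}
    context = handlers[alert.alert_class](alert)
    return {
        "disposition": "needs_human_review",  # human stays in the loop
        "summary": llm_summarize(alert, context),
        "context": context,
    }


result = run_playbook(Alert("cloud-credential-theft", {}))
```

The design choice matches the two-pronged point above: every branch that decides an outcome is plain code, while the part that benefits from flexibility, summarizing messy context, is the only part handed to the model.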
Host: Yeah. No, I mean, I agree that it's not a deterministic technology. But while building solutions with it, we often look for a deterministic nature from it; that's where the question comes from. Now, keeping that in mind, when you are designing a tool, are there any architectural principles that you keep in mind, like, hey, I need to follow these five things or achieve these five things? You mentioned guardrails; anything else there?
Dakota: Man. I would say one thing: I usually try to sketch the interface first, because I write a lot of CLI tools and APIs and stuff. For modules too, for example a Terraform module, I'll try to sketch what that initial interface looks like first, so I have an idea of what I want and can work backwards from that. The other thing I would say is start with the simplest thing first.
And then when you need something more complex, reach for it. Don't reach for the fanciest thing, unless it's a fun learning project. Pick the simplest thing that can solve your use case, and then when you find a reason to move to something more complex, go for it. I don't think I necessarily have five specific principles.
Usually that's what I look for: the simplest thing; then, no, this doesn't work for me and I've got good reasons why; OK, next thing.
Host: Okay, so it's more like experimentation versus production; you should have some guardrails there. So speaking of CLI and different tools that you build: you recently posted about building an AWS GuardDuty alert triage agent, right? And GuardDuty sends a lot of alerts. In the post you mentioned that the agent was biased towards a non-malicious assessment unless it could be proven otherwise. So when you are implementing such tools or agents, what are some of the trade-offs that you think about, say, in the system prompt?
Dakota: Man, that's a great question. When I was playing around with it, I picked out a couple of different classes of GuardDuty alerts. And one of the ones I picked was the anomalous activity finding, which anybody that's worked with GuardDuty knows is an incredibly noisy finding class by nature, right? Because it's just claiming that, hey, this is abnormal for this person.
And I'm pretty sure that's one of the ones that's based on a model they train behind the scenes. So for that specific one, I told it to bias towards basically non-malicious unless you have significant evidence to prove otherwise. And it did just that. Now, I guess to your point, there's a risk that it could have come back and said, yeah, this is non-malicious, when it actually was.
But the other argument is, could we not argue that a human would do that too? Right? So it's tough, because I think a lot of it would also depend on what type of context you provide to it, right?
Like if you provide certain logs or certain business context. Again, this was kind of a POC I built, so I didn't really get to take it that far. But if you were able to provide context about the users involved in the event, and maybe their main working location or something like that, that might help sway it. But yeah.
Host: So yeah, the funny part is what you mentioned, that even humans make mistakes. When we talk about a new technology, we often assume that it can't make mistakes, that it has to be perfect, right? So yeah, it makes a lot of sense.
Dakota: Yes. Yeah. Sorry, I was just gonna say real quick: with any new technology, and I'm always cautious when things come out and they're very hyped, you have to be careful not to throw out the potentially valuable parts just because you're skeptical of the hype. And it absolutely will make mistakes, right? It's a statistical model that's basically guessing the next token. Any ML experts, please don't be mad at me for oversimplifying that.
That's what it is, right? But that doesn't necessarily make it bad. So that's just my thought on that. Like, it's nuanced, if that makes sense.
Host: And that's how the innovation also happens. If you remember the first version of ChatGPT, there was so much hallucination and things like that. But if you look at it today, it has gone down significantly. So yeah, I agree with you 100% on any new technology. Of course, you have to be a little skeptical. But at the same time, even as a skeptic, you should not just completely say that this won't work.
You have to play with it to figure out what will work and what will not. Now, one last question: how do you ensure that the agent doesn't miss? You gave anomaly as an example; that's a very good one. When you are building the agent, how do you ensure that it doesn't miss a clear malicious pattern? Is it more training? How do you plan around that?
Dakota: Yeah, so I've started to play around with that idea a little bit. A lot of the agent frameworks have this concept called evals, which are supposed to be basically like unit tests for that, where you inject a bunch of context and you test the expected result. That was one thing I haven't actually gotten to yet that I was thinking of: could I basically inject a scenario where I would want it to return a certain result, and then run the tests and evaluate it that way?
I still personally think it all comes down to the context. It's such a hard problem too, because I feel like this problem is unsolved even for humans. If you bias too far one way, the true positives are going to get flooded with false positives; bias the other direction and you're going to miss stuff. So my mind goes towards basically ensuring that the agent keeps returning a certain level of quality in its responses.
And also, I guess I would say, all of this would still very much be human in the loop. It's there to augment, especially where it's at right now: call out interesting patterns, summarize things. I wouldn't have it dropping anything, personally, especially if something's that critical. I think that should be basically deterministic, whether it's the rules in your SIEM or your automations, and then that can be fed to a triage agent. But yeah, I don't know if that directly answers the question; it's such a tough problem even without LLM triage or LLM tools in the mix.
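The eval idea Dakota describes can be sketched as scenario-based assertions against the agent. The triage function below is a deterministic stub standing in for the LLM agent so the sketch runs offline; in a real eval it would be the actual agent call, and the scenario names and fields are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class EvalCase:
    """A scenario injected into the agent, plus the verdict we expect back."""
    name: str
    context: dict
    expected_verdict: str


def triage(context: dict) -> str:
    """Stand-in for the agent under test; a real eval would invoke the LLM."""
    if context.get("known_ioc_match") or context.get("mfa_bypassed"):
        return "malicious"
    return "benign"


def run_evals(cases: list[EvalCase]) -> dict[str, bool]:
    """Run every scenario and record whether the agent matched expectations."""
    return {c.name: triage(c.context) == c.expected_verdict for c in cases}


results = run_evals([
    EvalCase("clear-cred-theft", {"known_ioc_match": True}, "malicious"),
    EvalCase("routine-login", {"new_region": True}, "benign"),
])
```

Run against the real agent, a suite like this catches regressions in exactly the failure mode discussed above: a prompt or model change that makes the agent start waving through scenarios it was previously expected to flag.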
Host: So yeah, human in the loop, we hear about that quite a bit, right? Initially when LLMs were rolled out, there was a lot of concern that, we'll all lose jobs and things like that. Looks like there will be humans in the loop. So we have jobs at least for a few years. With that, I think we come to the end of the security questions.
But before I let you go, I have one question about recommendations. It could be a blog or a book or a podcast, anything that our audience should go and learn more from. It could be security related, non-security, AI, whatever is top of mind for you.
Dakota: Man, book-wise I would say, and I was basically almost forced to read this book, but I read it in a book club at a company I was at a couple of years ago: The Pragmatic Programmer. I didn't initially come from a software engineering background; it was kind of infrastructure security, and then I had to learn that stuff myself. It almost reads like a collection of blog posts on how to engineer and build things.
And even if you're not a software engineer, you can take those principles. There's one in there that talks about orthogonality, or basically loose coupling: two separate pieces, two separate systems, shouldn't have knowledge of each other's internals. And that's huge for designing systems. Even if it's not code, it can be processes or Terraform modules or how we engage with others. I definitely recommend checking that out.
And then, maybe this is a cliche, but the other thing is just go build stuff. I mean, today it's easier than ever to prototype and build and play around, even if it's just for fun. I'm sure vibe coding is a dirty word depending on who you ask, but go experiment, go build, go learn. When it breaks, go Google it, and just have fun with it. That's what I would say.
Host: Hopefully folks Google it, or go to ChatGPT and say, here is my error, how do I fix it? No, I totally agree with getting your hands dirty, right? Because that is the best way to learn. And today things are much simpler than five years ago, when you had to read a lot of blogs and a lot of documentation before you could even start.
Now you just go and write a prompt that, I need to...
Dakota: Don't ship it straight to prod.
Host: Create an application and everything gets created. I think I recently saw an ad of a startup which where you use Siri and say that, create an application for me which does this and this and it creates a mobile application for you. So, I mean, things have become so easy, right, nowadays. So yeah, getting your hands dirty is the best advice I can think of for our audience. And with that,
We come to the end of the podcast. Thank you so much, Dakota, for joining. It was a fun conversation.
Dakota: Likewise, thanks for having me.