Understanding the Role of Asset Management and Kubernetes in the Cloud with Kesten Broughton


  • When it comes to asset management, always start from the outside in, like DNS, to understand your dangling subdomains or public IPs, which tend to be overexposed, and work your way inward toward your infrastructure.
  • For asset inventory, get data from multiple sources so that you can use it to enrich the data in your data lake.
  • Security teams should understand the points of friction with engineering teams and enable them to move faster and roll out more and more features.


Host: Hi, everyone. Welcome back to another episode of ScaletoZero. I'm Purusottam, co-founder and CTO of Cloudanix. Today's topics include asset management in the cloud, Kubernetes, and what role they play when it comes to security.

And to discuss on this topic, we have Kesten Broughton with us.

He presented a talk at this year's fwd:cloudsec with colleagues from the autonomous driving company Nuro about cloud asset inventory, particularly in GCP. Earlier, in 2018, he gave a talk, All Your Trusts Belong to Us, about his research on confused deputy problems in AWS with cross-account assumed roles for SaaS vendors.

He has spent three years developing the cloud services line at Praetorian, as well as pen testing IoT, web apps, DevOps infrastructure, and Kubernetes, and consulting on secure design reviews. He has a degree in math and physics and has spent over 12 years in software development, DevOps, and security.

And a fun fact: he once rode his bicycle all the way from Vancouver, Canada to Chiapas, Mexico. So there are two things that I want to ask. Thank you so much for coming to the show. Did I say the company name, the one with the cloud services line, correctly? That was one. And then, how many days did it take for you to bicycle all the way from Vancouver to Mexico?

Kesten: All right, yes, you got it right. Praetorian was the sort of cloud security pen testing role that I had for several years. That's a company in Austin that does broad-spectrum security testing. And how long did it take?

Well, I left Vancouver September 13. I arrived in Austin on December 13. So I was stopping every couple of days and working on farms and things like that.

So three months is a long time. You don't need to spend that long if you're riding that far. Then I jumped on a bus to get to Northern Mexico and rode another four or five weeks in Mexico to get from Northern Mexico to Chiapas. So it took quite a while.

Purusottam: Wow, yeah, it must have been fun as well, like fun and rewarding, I would assume. It was for sure challenging, though.

Kesten: It was really, really great. Yeah. The weather was probably the most challenging part. I was being chased by winter all the way down. So I went down through Idaho, Utah, and Colorado. I was getting caught with snow on the ground, and I'd pitch my tent and wake up in the morning with a big outline of my body where it had melted all the snow.

Purusottam: That sounds fun. Thank you so much for coming to the show. For our viewers who may not know you, do you want to briefly share about your journey?

Kesten: Sure. All right, so yeah, I think, like you mentioned, I started off in software development. Actually, my first job out of graduation was at a video game company, working on the physics engine. So I did that for a couple of years. Took a long break. Did some farming and traveling and things like that. When I got back, it was in Austin, and I did some DevOps work for several years.

And then eventually gradually I shifted into security when I moved to Praetorian. Really enjoyed that work. I think it's a great experience for anybody in DevOps to spend a little bit of time on the offensive end where you really try and attack and pick apart a lot of different things, because on the DevOps side, you're more traditionally, you're going to be working on a few things for a long period of time.

You've got a lot of legacy stuff. Whereas in the consulting world, you pick up one thing for two weeks, and you have to learn all about it really fast and become an expert in that, and then you move on to something completely different. So I think it's a really great complement to doing DevOps work. And then eventually I felt like, okay, well, I've got these new skills now in security, like on the offensive side, I'd really like to get back into building some more. I kind of find that a really creative process. So that's sort of what led me back into the sort of...security plus DevOps side of the house, I guess, DevSecOps. And yeah, just recently I made a transition from Nuro to Crunchyroll, where I'm working on the security infrastructure team.

Purusottam: Lovely. Yeah, I'm looking forward to like learning some of it as part of today's discussion.

So let's start with the security section, right? And I generally ask this to all the guests and we get different answers from everyone, right, because everyone's day looks very different. So,

What does your day look like today? A typical day.

Kesten: All right. Yeah, so I'll give you a snapshot of my current life, which is joining a new company. I find that's one of the most interesting times because I like to go very broad at first and kind of understand how the company works. What are all of its assets?

Actually, it's one of the times when I really focus on asset inventory, because I wanna understand all the things that are under their control. And, you know, very often you find that the security team might be really focused on the dominant cloud, but they actually have a footprint in the other major clouds. And those might be neglected because they're not part of the main engineering thrust, but they are actually pretty important, because I've found in the past that very often the breaches come from those non-dominant clouds that aren't as heavily scrutinized.

The other thing that I like to do a lot is I hang out in the org chart a lot. So I'm meeting people all the time. Those first few weeks have a lot of one-on-ones as part of the onboarding process and I like to get some context on, you know, where they are in the organization.

Also, I just keep referring back to that org chart to understand the structure of the company at this point in time. So I do a fair bit of that. And then I try and join a lot of channels. So, you know, if it's a Slack company, then I join as many Slack channels as I can where we might have a role to play as a security team.

And what I'm looking for is usually signs of friction. So where are things fragile? Where are things being slowed down? And very often, you'll find that there's a bit of an intersection. Developers are really good at writing code. But when it comes to deploying in the cloud, very often there'll be issues with the identity-aware proxy or getting least privilege or rolling it out through Terraform or things like that.

And so, hanging out in various channels where you can absorb that information, you understand what is slowing this engineering team down and where those things overlap with security. And that's my role: to make those pain points go away in a way that ends up with a more secure deployment process.

Purusottam: Sounds like you have your hands full. You are doing quite a few things in the early days at Crunchyroll already. So let's start with asset management, right? You highlighted that that's one of the things you focus on early on as well. One of the quotes from Daniel Miessler's blog says, “Asset management is arguably the most important component of a security program. But I know virtually zero companies that have a security person dedicated to it.”

Can you help our audience? What is he trying to highlight here?

Kesten: Sure. So one of the most basic questions is: we have an IP, and we want to know, is it ours? Or is it malicious? Does this IP live in GCP or AWS, in your cloud infrastructure, or is it an on-prem IP? Something like that. And so a good asset inventory will be able to answer that question very quickly. But if you don't have a good asset inventory story already,

That can take a day or longer, because it could be coming from so many different systems. You have to find the owners of those systems, get permission to get in there and look around. You might have to rely on other people to do these things. Whereas having everything built into a nice data lake or data warehouse or something like that, where you've got all of your feeds coming in, means that if it's in there, you'll find it with your query within a matter of seconds.

That's what asset inventory is all about, is being able to answer questions about your organization in a really short period of time.
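
The "is this IP ours?" question Kesten describes can be sketched as a lookup against your known address ranges. This is a minimal, hypothetical illustration; the CIDR blocks and environment names are invented, and a real inventory would pull ranges from your cloud and on-prem feeds.

```python
import ipaddress

# Hypothetical inventory of address ranges we control, keyed by environment.
OWNED_RANGES = {
    "aws-prod": ipaddress.ip_network("10.20.0.0/16"),
    "gcp-data": ipaddress.ip_network("10.30.0.0/16"),
    "on-prem": ipaddress.ip_network("192.168.0.0/16"),
}

def locate_ip(ip):
    """Return the environment an IP belongs to, or None if it isn't ours."""
    addr = ipaddress.ip_address(ip)
    for env, net in OWNED_RANGES.items():
        if addr in net:
            return env
    return None
```

With the ranges loaded once, the query that would otherwise take a day of asking around answers in milliseconds, which is the whole point of the inventory.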

Host: Okay, so one follow-up question to that. You took a very good example, which is an IP, right? When you are getting that data and ingesting it into your data lake, are you trying to capture a bare-minimum data set and metadata around it, or are you trying to get as much as you can so that you can use it later for correlating with other assets or other owners of the system?

Kesten: Yeah, so typically for your feeds, you can do snapshots for things that are slow, like a static IP on an EC2 instance or something like that; it's fine to have a daily snapshot. If you're dealing with IPs attached to Kubernetes, it has to be much more of a streaming type thing. But the thing that I really like, and this is why I often build my own tools, is being able to enrich that data.

So, if it's an IP, you want to know: is that IP attached to a load balancer? Is it protected by your CloudFront or Vercel or whatever it is that you have at the edge, a CDN or something like that? So you do need to be able to string all of these things together to answer the question more completely, as quickly as possible. And that's where a lot of tools fall down: if they only have an interface that allows you to ingest and then retrieve the information as it was put in, then you don't have the ability to enrich it. And typically that enrichment process is gonna be so different from one company to the next that it won't be out of the box. You need to do some of that work yourself.
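
The enrichment step described here is essentially a join across feeds: a raw IP record gets annotated with load-balancer and edge/CDN information so that one query answers the full question. A hedged sketch, with entirely invented feed contents and field names:

```python
# Raw asset records and two enrichment feeds (all values are illustrative).
raw_ips = [{"ip": "203.0.113.10"}, {"ip": "203.0.113.20"}]
lb_feed = {"203.0.113.10": "prod-web-lb"}   # IP -> load balancer it sits behind
cdn_feed = {"prod-web-lb": "cloudfront"}     # load balancer -> edge protection

def enrich(record):
    """Join one IP record with the load-balancer and CDN feeds."""
    lb = lb_feed.get(record["ip"])
    return {**record,
            "load_balancer": lb,
            "edge": cdn_feed.get(lb) if lb else None}

enriched = [enrich(r) for r in raw_ips]
```

In practice this join would run inside the data warehouse rather than in application code, but the shape of the problem, chaining IP to LB to edge, is the same.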

Purusottam: Okay, so like enrich as much as you can so that it adds more value when you are querying the data later on. So one question that comes to my mind is at what stage of a company should I start thinking about it? Should it be from day one or let's say if you're running a startup, you hit a certain point and then you start thinking about it. What's your take on that?

Kesten: One of the things that is always a sore point in asset inventory is the third-party SaaS vendors, which can be considered part of your asset inventory. So it's not compute that you own. It's things that are cloud services, but you're granting them access into your systems. And so you need to be able to say, if this vendor gets popped, is that a vendor that we use or not?

So that's why it ties into asset inventory. You need to be able to answer that question very quickly. And I've found at most of the companies that I've looked at over time that very few of them can answer it. So there aren't a lot of companies that can give you that list. And part of the reason is that list is usually 300 to 500 vendors long these days.

Things have changed a lot in the last 10 years where you used to have this sort of very strong sense of we have a perimeter. And now that perimeter is so full of holes, it's so Swiss cheese because everything is connected to the web.

That means that it's a much harder problem. And so that would be the one that I would focus on if I were doing a startup. I would keep track of that. Also for other reasons, like cost. Very often you'll have a champion of a tool come to the company, and they'll get it going, and you're paying $100,000 a year for it or something like that; then that champion leaves, and now you're not squeezing the juice out of that thing anymore.

And so you wanna have this as a process where you make sure that every one of these SaaS tools has an owner, has a champion, that you can justify the cost, that you're getting good value out of that tool and that you're not needlessly duplicating. Very often there'll be two tools and you have both of them and from the outside it looks like, well, these two tools do the same thing.

But when you get down to it, there are nitty-gritty reasons why you have both: it's the right tool for this job, and then the other tool fills in some other gaps or something like that.

But what then gets lost, for lack of asset inventory or a process, is knowing when to choose which, for the other people who come to the company. I might have the ability to front things with CloudFront as my CDN, or with one of the many other CDN options that are not part of the cloud.

And there are reasons that you might need to do that. What's lacking very often is the decision process that will guide the next person who comes through, so they don't see, oh, I can choose one or the other at random. There needs to be a really clear process for why you choose one versus the other. So that would be for an early-stage company.

I would say make sure that you have a good grasp on your SaaS spend and that you're getting value out of it, and that you can explain if you have two things that look like they're doing the same thing, when to use one versus the other.

Host: Yeah, that's a very good point. I can relate to it as well, like the SaaS champions that you highlighted. It often happens that there was a design process followed to come to a conclusion, and there was a champion who owned it. Once they leave, the new person who owns it isn't clear on what the design decision criteria were.

And they either do not use it anymore, or they look for a new tool while still paying for the first tool as well.

So yeah, you are absolutely right that having the decision process documented and handovers between owners will make sure that the tool keeps getting used for the purpose it was acquired, right? So yeah, spot on.

On asset management, I want to ask one more thing, which is: not all assets have the same significance, right?

Let's say an S3 bucket storing sensitive data versus one storing internal logs, or a GCP instance connected to the world via a public subnet versus an instance in a private subnet.

So when you are starting the asset management program, how do you prioritize which assets to track and which ones to maybe ignore in the first version and get to next?

Kesten: Right. So I usually start from the outside in, and that typically means DNS. DNS records are assets in the sense that some DNS management tools call them properties. That is typically the way that most people are going to get into your resources. And so very often, there are dangling subdomains. Somebody creates a record that points to an IP in a cloud.

And that IP is somewhat ephemeral, right? You don't have it forever if you turn off the service but forget to remove the DNS. Now you're pointing at a public IP. And it turns out that people have done this. It's not that difficult to write a program that will just loop: request an IP and throw it away, request an IP and throw it away, until you find the one that is being pointed to. And now you have companyname.com pointing to an IP address that you own.

From that standpoint, you can now do all sorts of attacks: use the SEO that was associated with that page to boost traffic to the endpoint that you now control, or launch attacks because there's some trust between a domain and its subdomain, things like that. So it's one of those things that is important to keep a handle on.

Another one: situations where there's often a lot of sprawl and you don't have good control. It's very common that you will have to do some work to track down where a domain is managed. Like, where's the registrar? And very often, if it's a younger company, you'll find it's still in the name of the founder or something like that. And he doesn't check those emails, right? Say there's a warning that your certificate is about to expire or something like that.

So there's often some corralling that needs to be done around your DNS management. That's one of those things that I think sticks out early on; you need to be able to understand that.
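
The dangling-subdomain check described above can be sketched offline as a comparison between your DNS records and the set of IPs you still control. Real tooling would pull records from your DNS provider's API and owned IPs from your cloud inventory; the records and addresses below are hypothetical.

```python
# DNS records as name -> target IP (invented example data).
dns_records = {
    "app.example.com": "198.51.100.7",
    "old-demo.example.com": "198.51.100.99",  # backing service was torn down
}

# IPs currently allocated to us, per the asset inventory.
currently_owned_ips = {"198.51.100.7"}

def find_dangling(records, owned):
    """Records pointing at IPs we no longer control: takeover candidates."""
    return sorted(name for name, ip in records.items() if ip not in owned)
```

This is exactly the race an attacker runs from the other side: allocate and release cloud IPs until one matches a record you forgot to delete.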

Also, if you're doing what I mentioned, hanging out in the Slack channels and looking for things, you need to know where to go and look. They'll usually give you a URL which has the domain name in it. And now I wanna start at the top, at the outside, and see: do we have a WAF in front of that? Do we have a CDN or bot protection or anything like that? And all that information is easiest to get if you have a good understanding of the DNS.

Host: Yeah, you highlighted a very good one, because it's very common to have dangling DNS entries mapped to IPs which you owned at one point but which have now been released. So they're open, and it's fairly easy to get that information as well.

There are many open source tools which you can run on any domain, because DNS information is public, right? So you can run them on any domain and then, as you highlighted, write a Python program to see whether it has a mapping or not. And then you can use that for your attack, let's say.

So you start from the external attack vectors and work your way inside from there.

Kesten: Mm-hmm. That's right.

Host: Makes sense, makes a lot of sense. Now, I want to slightly increase the scale. Nowadays, everybody is moving to Kubernetes; a lot of folks, let me say, are moving to Kubernetes. So how does this change when it comes to the Kubernetes world?

Because a lot of the workload that we create in Kubernetes is ephemeral. And how do you track in that case?

Kesten: Right. So the talk that we gave at fwd:cloudsec was about how we leveraged Google's Cloud Asset Inventory. In AWS, that would be the AWS Config service. And what we're doing right now is just taking a daily snapshot, and that is sufficient for most things, like DNS records or EC2 instances or your VMs and things like that.

For things that don't change that rapidly, capturing them on a daily basis is fine. Things in Kubernetes are completely different. There you really will be out of touch if you're trying to debug something or troubleshoot something. Very often those things will be less than a day old.

And so that's a case where you need to either snapshot very frequently, or, a better way that I think is supported by most clouds, use a feed. The thing is, the cloud has all of this state, right? So it's just: how do you ask your cloud for it?

And the wrong way, we argue in our talk, is, if you've got 200 accounts and maybe a hundred Kubernetes clusters spread out all over the place, to do a for loop over every cluster, and then for every cluster, a for loop over every namespace or however it is that you need to go and get all your results, and you might have to paginate the results out and everything like that. It's a big mess and it takes a long time.

You might end up DDoSing your own system because you're making so many calls to the APIs. There are all sorts of things that can go wrong. All that data is there in the cloud, so it's really on the cloud providers to expose that information in a way that is easy for the end customers.

And so we did half of the job, which was getting the slow-moving stuff out of Cloud Asset Inventory. GCP has Cloud Asset Inventory feeds as well. The next step would be to tap into those: remove those assets from the daily snapshot and just use feeds, so that you're continuously getting the updates and you have the same state information that the cloud has on the backend. That's what you really want.

Host: Okay, so continuous feed. Let's say you have a webhook or something you have configured so that you are constantly getting the data and then you feed that into your data lake so that you have the latest information.

Kesten: Yeah, the clouds that I've looked at will have a Pub/Sub option. So you can choose for Cloud Asset Inventory to dump to buckets or to send to a Pub/Sub feed. And that's what you would trigger off of to update your database on the back end.
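
The consumer side of such a feed reduces to applying each change message to your local inventory table. A minimal sketch, assuming a simplified message shape (real Cloud Asset Inventory feed payloads are richer; the asset names and fields here are invented):

```python
# Local inventory: asset name -> latest known resource data.
inventory = {}

def apply_feed_message(msg):
    """Apply one asset-change message (e.g. delivered via a Pub/Sub
    subscription) to the local inventory."""
    name = msg["asset"]["name"]
    if msg.get("deleted"):
        inventory.pop(name, None)        # asset removed from the cloud
    else:
        inventory[name] = msg["asset"]   # create or update in place

# An ephemeral pod appears and then disappears; the inventory tracks both.
apply_feed_message({"asset": {"name": "//gke/pod-abc", "ip": "10.0.0.5"}})
apply_feed_message({"asset": {"name": "//gke/pod-abc"}, "deleted": True})
```

Because each message mutates state incrementally, the database stays as fresh as the feed, which is what a daily snapshot can never give you for short-lived Kubernetes resources.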

Host: Yeah, I have seen some folks also use Pub/Sub to push it to, let's say, BigQuery or something, so that they can query that data later on. Makes sense.

Now, when it comes to, let's say, Kubernetes or cloud-native environments, they rely on automation quite a bit, right? Which can sometimes make it challenging to ensure that security measures are applied correctly. Let's say there is a misconfigured automation; that could introduce vulnerabilities which could be difficult to identify and remedy.

So what role does automation play when it comes to DevSecOps?

Kesten: Right. So for me, and I might have a slightly different opinion from the majority out there, for me, automation can be overdone. So it's super important in cases where you have multiple environments, dev, stage, prod, you need things to be the same. It's super important with a lot of actors working on it at the same time.

So like, you know, a large team that uses the same environment. Again, super important to have that change control so that people don't step on each other's work. And it's super important if you're going to be frequently updating it, because again, that means you're going to have a lot of change and you need to have more change management.

I think it can be overdone in that if you're setting up, you know, very often on the security team, for example, we don't have DevStage Prod. We're just making sure that we get all logs to a central system.

That can be done in many different ways, and very often it won't be presented as Terraform or something like that. If it is, that's great; that's what I'll use. But if it's gonna be extra work to go from deploying it, which might take an hour, to making it really robust with an excellent infrastructure-as-code story, that's where you might be spending a bit more time on your infrastructure as code rather than doing more security business-logic type things.

And the thing is, Terraform models the cloud's state, but the cloud has the true state, and the Terraform model of it can drift out of date. So as providers age or get updated, I've seen things as simple as: the default used to be an empty string, and now the default is none.

And that change means that even though I didn't touch this deployment, now it's broken and I have to spend time to go and fix it. And if it's a monorepo of Terraform, there might be a backlog of three or four other things that are broken that need to be fixed before I can fix my thing. That's where you start to feel like, oh, is this really providing me the extra value?

I have my asset inventory in the cloud. I can query the cloud directly and I know it. What is the change management step there?

Well, if this is only gonna be done once, if you're only gonna deploy it and forget it because it works, then you're fine.

That was our experience with Cloud Asset Inventory. We deployed it with a Cloud Function and it was done. We actually did do it in Terraform. But it's one of those cases where I think you could just deploy it manually or with some gcloud commands or something like that. You're never going to touch that again for probably at least a year.

And if you are going to spend more time doing infrastructure as code, putting it in Terraform, that might not be your best use of time. Because, like I say, there is maintenance with Terraform over time; it's not a zero-maintenance effort. And so if you're gonna spend more time maintaining it because it's in Terraform, as opposed to some other deployment method, then I think you should consider those other deployment methods.
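
The drift example given here (a provider default silently changing from an empty string to none) can be illustrated as a diff between the configuration your code assumes and what the cloud now reports. This is a toy sketch, not how Terraform computes plans internally; the field name is invented.

```python
desired = {"description": ""}    # what the old Terraform code assumed
actual = {"description": None}   # what the updated provider now returns

def drift(desired_cfg, actual_cfg):
    """Return {key: (desired, actual)} for every field that differs."""
    return {k: (desired_cfg.get(k), actual_cfg.get(k))
            for k in desired_cfg.keys() | actual_cfg.keys()
            if desired_cfg.get(k) != actual_cfg.get(k)}
```

An untouched deployment now shows a non-empty diff, which is exactly the maintenance tax being described: someone has to reconcile a change nobody made.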

Host: Okay, so yeah, it's definitely like you have a different perspective compared to others, because most folks say that yeah, we should automate everything.

But I see your point where, let's say, the cloud provider has changed some of the defaults, some of the parameters, as you said, right?

Earlier it was an empty string; now it is none. Or the Terraform modules have gotten outdated and you need to change a parameter or something like that. You just end up maintaining it, even though it doesn't add any value to the business or from a security perspective either.

So yeah, it makes a lot of sense.

So now the next thing that I want to talk about is that security-related tasks, whether automated or done manually, like asset management, which can be automated as well, are often seen as a roadblock to business growth.

Because often security teams are seen as, hey, they're going to block us from moving to production, or something like that.

So how can security teams work with other business units?

So that they can show that, hey, we can help the organization increase revenue or improve the bottom line?

Kesten: Yeah, absolutely. I mean, one of the primary things we can do is help engineering go faster. And typically, if a security organization gets a reputation for being the house of no, then it won't be able to play that role as well.

And so one of the things that I'm doing right now, for example, is looking through the logs for permission denied errors, mining that information and trying to understand: do we have a lot of permission denied errors? Those shouldn't really be happening in prod.

So it's a very interesting thing from a security perspective. In prod, you also shouldn't really have queries in general like get IAM role. Who needs to be asking about IAM role permissions in prod? That should all be known in advance. It should all be part of your IaC.

Yes, you should be deploying with Terraform into your prod. But for other environments, where you do have those questions going on, it's nice to reduce the noise. And very often, I find that there is a lot of noise around IAM, because people deploy things, they change things a little bit and now it doesn't work, or they're trying to follow along with a blog and they can't get it working.
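
The log-mining exercise described here, counting permission-denied events by principal and action so you can spot IAM friction, can be sketched with a few lines over structured log text. The log format, principals, and actions below are entirely made up for illustration.

```python
from collections import Counter

# Invented audit-log lines in a simple key=value format.
logs = [
    "env=prod principal=svc-a action=storage.objects.get result=PERMISSION_DENIED",
    "env=prod principal=svc-a action=storage.objects.get result=PERMISSION_DENIED",
    "env=dev principal=user-b action=iam.roles.get result=PERMISSION_DENIED",
    "env=prod principal=svc-c action=compute.instances.list result=OK",
]

def denied_in_prod(lines):
    """Count permission-denied events in prod by (principal, action)."""
    counts = Counter()
    for line in lines:
        fields = dict(token.split("=", 1) for token in line.split())
        if fields["env"] == "prod" and fields["result"] == "PERMISSION_DENIED":
            counts[(fields["principal"], fields["action"])] += 1
    return counts
```

The top entries of that counter are the friction points worth chasing: either a workload is genuinely misconfigured, or an engineer is stuck fighting IAM and could use a paved road.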

That's a lot of friction, and it's something that is security-adjacent. It might be closer to SDLC, the developer engineering experience, but it's a place where security can really help by creating, like, this is the way to do IAP, identity-aware proxy.

And that's something that is very common: any engineer who has to build an admin page that's only available to the backend engineers. For that sort of thing, make sure it's very clear that there's one very well-trodden road that's published as the documentation of how to do it.

And then there are five or six different ways that your engineers would like to be able to use it, and that's where they discover a lot of friction in implementing it. So if you can develop out, here are the many different ways of doing an identity-aware proxy for your backend admin panels, then you're actually gonna reduce the number of these errors that you see, because you've solved that problem and handed it over to the engineers, and they're able to do it much more quickly.

So I would say that's one of the main things: areas where it is a security thing, but it's a friction point for deployment. Because we now put so much on developers that is closer and closer to deploying, those are the things to really look for as a security engineer. You will make a lot of friends if you can speed up these times.

I think security engineers should be very aware of DORA metrics, or whatever it is that the engineering team uses to measure their velocity, because we should be looking at those as well, and making sure that the things we ask them to do don't negatively affect those DORA metrics.

The big one there that would affect it would be a code scanning tool. If you put a gate of code scanning in front of deployment, you've inserted yourself into what should be a very tight, very efficient development loop. And so you have to find another way, and maybe that is doing a post-deployment check. And yes, that does mean that you might have deployed some vulnerability for a few hours.

But if that means that over the course of a year you're able to meet two or three more sprint objectives, then that's probably going to be worth it, unless you're in a domain where you don't have that tolerance for risk.

Host: Right. I liked how you started: ultimately, security teams should help the engineering team move faster, right? Because that's what helps the organization, let's say, increase revenue or improve the bottom line. Find the friction between security and engineering, and enable or help the engineering team understand how they can move faster while also keeping security in mind.

The thing that you highlighted, like showing, let's say, vulnerability information post-deployment rather than pre-deployment.

Of course, if you have the tolerance for it, then maybe that can still not block the engineering team from moving forward, but at least can help them understand the priority of those vulnerabilities for, let's say, next release.

So that way, both the teams have wins out of it.

So another question that comes to my mind, again around asset management: in your recent presentation at fwd:cloudsec, you and your colleagues shared some learnings on asset management.

Where do you see the asset management capabilities provided by cloud providers, let's say GCP Cloud Asset Inventory or AWS Config, falling short?

Kesten: Right. So with both GCP and AWS, they provide a GUI for you to be able to go and do queries. But it's quite limited. It's not like a full SQL query. And you can't do joins, which is like the thing that I mentioned earlier. You really want to enrich all of the asset inventory information you have so that you can answer more interesting questions.

And so in both cases, the cloud providers give a way to dump your asset inventory from the service into, for GCP, BigQuery, and for AWS, Athena. And that's the right way to go: you remove that limitation on your ability to do queries and join data when you put it into a proper query engine.

And so that's one of the main things. The other place where they fall short, I guess the next important thing, is: now that you've found the asset, now that you've found that IP and what VM it's associated with,

if you want somebody to fix it, you have to know who the owner is. And so that's one of the next steps: enriching your data set with ownership. That could come from your team or your infrastructure as code. It could come from your GitHub repos, you know, the CODEOWNERS file; you might be able to map that to infrastructure in some way. Or it could come from tagging, actually.

That's one of the really underutilized things in cloud right now. Not many companies have a strong tagging story. But that's one of the places where, if everything that you deploy has an owner tag, then you have it right in the place where it should be. The cloud knows who the owners are when things need fixing, and you just have to query that. If not, though, then you have to enrich with, like I say, some of the secondary information that you might have. Those are the main places.
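
The ownership-enrichment fallback chain outlined here (prefer the cloud tag, then map the source repo through a CODEOWNERS-derived table, else mark unknown) can be sketched directly. All the team names, repos, and asset fields below are hypothetical.

```python
# Secondary ownership source: repo -> owning team, derived from CODEOWNERS.
codeowners_map = {"repo/payments": "team-payments"}

def resolve_owner(asset):
    """Resolve an asset's owner: tag first, then CODEOWNERS, then unknown."""
    if "owner" in asset.get("tags", {}):
        return asset["tags"]["owner"]           # best case: tagged at deploy time
    repo = asset.get("source_repo")
    return codeowners_map.get(repo, "unknown")  # secondary source, then give up
```

Running this over the whole inventory also surfaces the gap itself: every "unknown" is a resource your tagging story failed to cover.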

And then there is a higher level of intelligence, which comes from connecting different resources: the graph layer. If your typical asset inventory looks like a spreadsheet, the graph layer puts it into a spider web. Now you're able to follow the connections: this IP is attached to this load balancer, which is attached to this Route 53 record, and so on and so forth.

And that can really help you if you are trying to do some end-to-end type of work, something more holistic, like blocking bot traffic. You need to know all of the stages that traffic passes through on the way to your origin server so that you can intelligently tweak all the different layers in between. That's where having a graph view of your asset inventory really helps a lot.
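The graph layer can be sketched in a few lines: model each asset as a node, each relationship as an edge, and the "IP to origin server" question becomes a graph traversal. The asset names and edges here are invented for illustration.

```python
from collections import deque

# Hypothetical graph layer: tracing a public IP to the origin server
# becomes a breadth-first walk instead of repeated spreadsheet lookups.
edges = {
    "ip:203.0.113.7": ["lb:edge-lb"],
    "lb:edge-lb": ["dns:route53-zone", "vm:origin-1"],
    "dns:route53-zone": [],
    "vm:origin-1": [],
}

def path_to(start: str, target: str) -> list[str]:
    """BFS from start, returning the first path that reaches target."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in edges.get(path[-1], []):
            queue.append(path + [nxt])
    return []

print(path_to("ip:203.0.113.7", "vm:origin-1"))
# ['ip:203.0.113.7', 'lb:edge-lb', 'vm:origin-1']
```

In practice this is what graph-backed inventory tools do at scale; the point of the sketch is that once edges exist, "what sits between this IP and my origin?" is a one-call question.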

Host: Makes sense. So one follow-up question: you highlighted a few things, right? Like using tags to understand owners, or the CODEOWNERS file, or even the Terraform code. So there is some effort needed from, let's say, DevOps or engineering so that you can enrich the asset information, right?

So what type of effort do you generally see that teams need to consider when they're let's say going through sprint planning or something like that to work on the asset inventory or asset management?

Kesten: Right, so it's good to consider these assets at the planning phase for sure, especially if you're introducing a new cloud service that hasn't been used before and needs to go through a security design review, or if you're introducing a new connection to a SaaS provider.

Those are two common things that you definitely want to catch. And it's good when your engineers know that these parts of a deployment require a security review, and they know to ask for it.

But I think actually blocking or monitoring these things can be done at the tooling level. You can have a linter that will refuse to deploy something if it doesn't have a tag on it, for example. So it's more about convincing everybody that this is a worthwhile effort, that it makes sense to know who the owners of your assets are, and then building that in as a new requirement.

You might not fix everything retroactively, but you might say every new deployment gets a tag, and you build it right into your Terraform modules or something like that, so that the tag is a required component.

And now you have it built into your tooling. You don't have to nag people about it. You just have to get the buy-in first to make sure that it's worth the little bit of engineering effort to get you there.
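The linter Kesten describes can be sketched very simply: a pre-deploy check that walks planned resources and fails if required tags are missing. The resource names and required-tag set are invented for illustration; in a real pipeline this would run against Terraform plan output.

```python
# Hypothetical pre-deploy lint: refuse resources missing required tags.
REQUIRED_TAGS = {"owner", "env"}

def lint_resource(resource: dict) -> list[str]:
    """Return a list of lint errors; empty means the resource may deploy."""
    missing = REQUIRED_TAGS - set(resource.get("tags", {}))
    return [f"{resource['name']}: missing tag '{t}'" for t in sorted(missing)]

plan = [
    {"name": "aws_s3_bucket.logs", "tags": {"owner": "data", "env": "prod"}},
    {"name": "aws_instance.batch", "tags": {"env": "dev"}},
]

errors = [e for r in plan for e in lint_resource(r)]
if errors:
    print("deploy blocked:", errors)
# deploy blocked: ["aws_instance.batch: missing tag 'owner'"]
```

Because the check runs in tooling rather than in review meetings, it enforces the ownership requirement without anyone having to nag, which is exactly the point made above.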

Host: Yeah, makes a lot of sense, like showing the value, convincing them how it adds value, and then going one step at a time so that it gets added as part of the engineering pipeline. Yeah, that's spot on. And yeah, that's a great way to end the security questions section as well. So let's go to the fun part, which is rating security practices.

Thank you so much, Kesten, for joining with me and sharing your insights with us. It was a fun conversation. Here are a few important points which stood out for me.

Rating Security Practices

So the way it works is I'll share a security practice and you need to rate it from 1 to 5, 1 being the worst and 5 being the best, and if you want to add some context, you can do that as well. Okay,

Granting users unrestricted access to systems and application so that development can move faster.

Kesten: Right. I'm going to use this one to be my oddball one as well, because I think that as long as it's not prod, you are probably doing this to some degree. And so rather than trying to whack-a-mole it away, find a way to actually support it. So this one, I'm going to say like a three, because normally it sounds like you would want it to be a one. You don't want to grant people access to prod.

Absolutely. OK, so one for prod. But outside of prod, I would say it's more like a three because I think the right approach is to find a way to support unrestricted access. I find that even when I join a new company, if I want to deploy something, I often don't have the privileges I need. And it takes quite a long time to get set up to have those privileges. So I just use my own personal account for doing a deployment of the rough infrastructure following a blog or something like that where I am the complete owner of everything.

I think in an organization, the way to do that would be to have a service control policy or something like that, or some automation that just annihilates everything on a weekly basis.

So everybody has their own AWS account, or everybody has their own GCP project, where they are the owner. Start from there, and then just make sure things don't accumulate: this very one-off, wild-west zone should never touch your important data and never become a dependency just because you don't have time to move it into a proper process.

No, you have to have some way to like completely annihilate it. But if you can do that, then actually granting people unrestricted access to systems is a great thing. It makes developers go faster. You just need to do it in a controlled way.
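That "annihilate it weekly" automation can be sketched as a simple TTL reaper over the sandbox inventory. The resource records and the seven-day TTL are invented for illustration; a real version would call the cloud provider's delete APIs.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sandbox reaper: anything in the wild-west account older
# than the TTL is flagged for deletion, so one-off experiments never
# quietly become permanent dependencies.
TTL = timedelta(days=7)
now = datetime(2024, 1, 15, tzinfo=timezone.utc)  # fixed for the demo

resources = [
    {"id": "vm-demo", "created": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": "vm-fresh", "created": datetime(2024, 1, 14, tzinfo=timezone.utc)},
]

to_delete = [r["id"] for r in resources if now - r["created"] > TTL]
print(to_delete)  # ['vm-demo']
```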

Host: So the standard architect response: it depends on the environment. If it is prod, then one; other environments, maybe three. No, I can totally relate to it. Yeah, makes sense. The next one is

Conducting periodic security audits to identify vulnerabilities, threats, and weaknesses in your systems and applications.

Kesten: Alright. So I think there's great value in having a third party come in and kick the tires. But they can be expensive, $100,000-plus for a good third-party pen test, so you really want to make sure that you design your pen test to get value out of it. And very often, you know, it's a checkbox exercise that you need to do for a compliance program or something like that. In that case, it might have value just from the ability to check the box.

And so you aren't really that interested in the results. That's unfortunate, because it's not a great experience for the pen testing team and it's not great value for you, but it's worth the $100,000, whatever. I think it's much better if you can think about: how do I design this annual pen test so that it fits the criteria for our checkbox, but also adds something that we don't know yet? So if you already run Amass or SpiderFoot scans or something like that, and you already have a good feel for what your attack surface is from an external perspective,

you probably want to think: okay, how do we give them some internal perspective, maybe let them review our work or something like that. But, you know, catching the external pen testing team up on what you already see isn't a bad way to start.

As a pen tester, having worked as one for three years, very often the company wanted that complete black-box experience. They would just want to know what an average hacker out there on the web can find. The problem is there are way more hackers than you can get coverage for in that two-week period that you have the engagement for. There's a lot of creativity out there.

A bug bounty program is better for that sort of thing, where you get a lot of different types of backgrounds and a lot of niche attackers who each attack their own niche, one after another.

And so, yeah, I think these things are really important. Internally, it should be a continuous thing. As much as possible, you shouldn't just run a scan once a month or once a quarter. It should be more like: what is preventing me from making sure, in a streaming sense, that I've got a good baseline, and every time I deploy something new, I do the check on the way?

Host: Yeah, makes a lot of sense. The third one that I want to ask about is providing training and awareness programs to employees to help them identify and respond to potential security threats.

Kesten: Yes, this one I give as a one, but it's really important that you think about it. And you should probably not have a one-size-fits-all security training program, because it's going to bore the engineers who have a fair bit of experience in securing certain types of things, especially data and that sort of thing. And it's going to be too much, or not have enough coverage, for other people, say in finance.

Like, there are a lot of really interesting attacks that go on in finance, like forged invoices, you know.

I'm thinking of the word in Spanish because I've been speaking a lot of Spanish lately.

Facturas are invoices. People will send fake invoices to finance departments, and, you know, big companies like Google have paid hundreds of thousands of dollars out to people who had no business relationship with them. They just asked for money, essentially, right?

So each department has something like that, where you need to work with them. And if you put all of your "please review our security resources" effort into one basket, you're maybe not going to have the time for each team later. Legal is another really important one.

There definitely needs to be a lot more joint work between legal departments and security departments. There's so much overlap there, and they're very often siloed. So designing things that are right for each department, I think, makes it a one. And if you're not doing that, it's probably a three.

Host: Okay, makes sense. I like the thing that you highlighted: you cannot have one-size-fits-all training. Depending on the experience level and the team you are speaking with, the training should be tailored, right?

So yeah, you are spot on that. Yeah, so that brings us to the end of the episode.

Thank you so much, Kesten, for joining and sharing your learning with us.

Kesten: It's been absolutely great talking with you. Thanks a lot for inviting me.

Host: Absolutely and to our viewers, thank you for watching. Hope you have learned something new. If you have any questions about security, share them at scaletozero.com. We'll get those answered by an expert in the security space. See you in our next episode. Thank you!

Connect with Kesten: https://www.linkedin.com/in/kesten-broughton-70318026/

Connect with Purusottam: https://www.linkedin.com/in/mpurusottamc/
